Official

F - Tangent Addition Formula Editorial by evima


Hints: https://atcoder.jp/contests/arc166/editorial/7374


Let \(\mathbb{F}_p\) denote the set \(\{0,1,\ldots,p-1\}\) with the four arithmetic operations defined modulo \(p\). Also, let’s write \(\mathbb{F}_p^{\times}=\mathbb{F}_p\setminus \{0\}\).

The following equation that \(t\) should satisfy,

\[t(x+y)\bigl(1-t(x)t(y)\bigr)=t(x)+t(y), \tag{★}\]

will be referred to as (★) below.


[0] Background of the problem and editorial

The problem features the equation (★) satisfied by the function \(t(x) = \tan(x)\). Since \(\tan(x)\) is a real-valued function, we cannot interpret it directly as an \(\mathbb{F}_p\)-valued function. We are asked to investigate something like the “\(\mathbb{F}_p\)-valued version of \(\tan(x)\).”

Using the complex exponential function, for real \(x\) we can write:

\[\tan(x)=\dfrac{\sin(x)}{\cos(x)}=\dfrac{(e^{ix}-e^{-ix})/(2i)}{(e^{ix}+e^{-ix})/2} = i\cdot\dfrac{1-e^{2ix}}{1+e^{2ix}}.\]

In particular, we can see that \(\tan(x)=i\cdot \dfrac{1-r^x}{1+r^x}\) for \(r=e^{2i}\). Based on this, we will aim for the conclusion that there are \(i, r\) such that \(t(x) = i\cdot \dfrac{1-r^x}{1+r^x}\) also for \(\mathbb{F}_p\) values (although there are some exceptions).
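This identity can be checked numerically; the following is an illustrative sketch, not part of the argument:

```python
import cmath

# Check tan(x) = i * (1 - e^{2ix}) / (1 + e^{2ix}) for a few real x.
for x in [0.3, 1.1, -0.7]:
    lhs = cmath.tan(x)
    rhs = 1j * (1 - cmath.exp(2j * x)) / (1 + cmath.exp(2j * x))
    assert abs(lhs - rhs) < 1e-12
```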


[1] When \(p=2\)

The constant sequences \(t=(0,0,0,0,0,\ldots)\) and \(t=(1,1,1,1,1,\ldots)\) satisfy (★). Therefore, the answer to the problem is Yes.

By considering the set of all \(x\) such that \(t(x)=0\), we can also see that the \(t\) satisfying (★) are classified into the following:

  • \(t=(0,0,0,0,0,\ldots)\).
  • \(t=(1,1,1,1,1,\ldots)\).
  • \(t\) determined by \(t(x) = \begin{cases}0 & (x\in n\mathbb{Z}) \\ 1 & (x\notin n\mathbb{Z})\end{cases}\) for some \(n\in\mathbb{Z}\).

[2] When \(p\equiv 1\pmod{4}\)

From \(p\equiv 1\pmod{4}\), there is \(I\in \mathbb{F}_p\) such that \(I^2=-1\).

By substituting \(x=y=0\) into (★), we get \(t(0)\bigl(1-t(0)^2\bigr)=2t(0)\), i.e. \(t(0)\bigl(t(0)^2+1\bigr)=0\), so \(t(0)\in \{0, I, -I\}\).
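This can also be confirmed by brute force over a small example; \(p=13\) and \(I=5\) (so that \(5^2\equiv -1\pmod{13}\)) are assumed values for illustration:

```python
# Substituting x = y = 0 into (★) gives v*(1 - v^2) = 2*v for v = t(0),
# i.e. v*(v^2 + 1) = 0.  With p = 13 we have I = 5, -I = 8.
p = 13
solutions = [v for v in range(p) if v * (1 - v * v) % p == 2 * v % p]
assert solutions == [0, 5, 8]  # exactly 0, I, -I
```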

[2 - 1] When \(t(0)\in \{I,-I\}\)

By substituting \(y=0\) into (★), we find that \(t(x)\in \{I, -I\}\) holds for every \(x\). Furthermore, a case analysis on the value of \(t(1)\) reveals that \(t\) is one of the following:

  • \(t = (I, I, I, I, I, \ldots)\)
  • \(t = (I, -I, -I, -I, -I, \ldots)\)
  • \(t = (-I, I, I, I, I, \ldots)\)
  • \(t = (-I, -I, -I, -I, -I, \ldots)\)

To solve the problem, it is sufficient to check if the constant sequence consisting of \(I\) or \(-I\) satisfies the condition.

[2 - 2] When \(t(0)=0\) and \(t(1)\in \{I,-I\}\)

From the equation obtained by setting \(y=1\) in (★), we find that \(t(x)=t(1)\in \{I,-I\}\implies t(x+1)=t(1)\). Therefore, in this case, we find that \(t\) can be one of the following:

  • \(t = (0, I, I, I, I, \ldots)\)
  • \(t = (0, -I, -I, -I, -I, \ldots)\)

[2 - 3] When \(t(0)=0\) and \(t(1)\notin\{I,-I\}\)

We can take \(r\in \mathbb{F}_p^{\times}\) such that \(t(1) = I\cdot \dfrac{1-r}{1+r}\).

From the equation obtained by setting \(y=1\) in (★), computing \(t(x)\) inductively in \(x\) yields the following.

  • If \(r^x\neq -1\) always holds, then \(t(x) = I\cdot \dfrac{1-r^x}{1+r^x}\).

  • If there is \(x\) such that \(r^x = -1\), then there is no \(t\) that satisfies (★).

For the former, it can also be easily confirmed by a simple calculation that the \(t(x)\) defined in this way actually satisfies (★).

In conclusion, in this case, we find that every \(t\) can be written as follows.

  • Take \(r\in \mathbb{F}_p^{\times}\) such that \(r^x\neq -1\) for any \(x\). Define \(t(x) = I\cdot \dfrac{1-r^x}{1+r^x}\).
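A minimal sanity check of this conclusion, with assumed small values \(p=13\), \(I=5\) (\(5^2\equiv -1\)) and \(r=3\) (the powers of \(3\) modulo \(13\) are \(1,3,9\), so \(r^x\) never equals \(-1\)):

```python
p, I, r = 13, 5, 3

def t(x):
    # t(x) = I * (1 - r^x) / (1 + r^x) in F_p; 1 + r^x is never 0 here.
    rx = pow(r, x, p)
    return I * (1 - rx) * pow(1 + rx, p - 2, p) % p

# Verify the tangent addition formula (★) for many pairs (x, y).
for x in range(20):
    for y in range(20):
        lhs = t(x + y) * (1 - t(x) * t(y)) % p
        rhs = (t(x) + t(y)) % p
        assert lhs == rhs
```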

[3] When \(p\equiv 3\pmod{4}\)

We consider proceeding with the discussion similar to [2]. First, since there is no \(I\in \mathbb{F}_p\) such that \(I^2=-1\) in \(\mathbb{F}_p\), the discussions corresponding to cases [2-1] and [2-2] are not necessary.

When trying to conduct a discussion corresponding to [2-3], the absence of an \(I \in \mathbb{F}_p\) with \(I^2=-1\) poses a difficulty. To overcome it, we can mimic the way we expressed the real function \(\tan(x)\) at the beginning, using the imaginary unit and the complex exponential function.

Use of the Gauss integer ring (use of the finite field \(\mathbb{F}_{p^2}\))

There is no number \(i\) in \(\mathbb{F}_p\) that satisfies \(i^2=-1\). So, we consider a new number \(i\) that satisfies \(i^2=-1\), and let \(F = \{a+bi\mid a, b \in \mathbb{F}_p\}\).

Operations are defined on \(F\) so that the distributive law, etc., and \(i^2=-1\) hold. Specifically, for \(a,b,c,d\in\mathbb{F}_p\), we define addition and multiplication as follows:

  • \((a+bi)+(c+di) = (a+c)+(b+d)i\).
  • \((a+bi)(c+di) = (ac-bd)+(ad+bc)i\).

\(F\) will be a field. That is, every non-zero element has a multiplicative inverse (here we use the fact that \(a^2+b^2\neq 0\) for \((a,b)\neq (0,0)\), which holds because \(-1\) is not a square modulo \(p\) when \(p\equiv 3\pmod{4}\)).

For those who know finite fields, it is sufficient to simply consider \(F = \mathbb{F}_{p^2}\).

We correspond \(a\in \mathbb{F}_p\) to \(a+0i\in F\) and consider \(\mathbb{F}_p\subset F\). We write \(F^{\times}=F\setminus\{0\}\).
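The arithmetic on \(F\) can be sketched directly, with pairs \((a,b)\) standing for \(a+bi\); \(p=7\) is an assumed example value:

```python
p = 7  # any prime with p ≡ 3 (mod 4)

def add(u, v):
    (a, b), (c, d) = u, v
    return ((a + c) % p, (b + d) % p)

def mul(u, v):
    (a, b), (c, d) = u, v
    return ((a * c - b * d) % p, (a * d + b * c) % p)

def inv(u):
    # (a + bi)^{-1} = (a - bi) / (a^2 + b^2); the norm a^2 + b^2 is non-zero
    # for u != 0 because -1 is a non-residue when p ≡ 3 (mod 4).
    a, b = u
    n = pow(a * a + b * b, p - 2, p)  # inverse of the norm in F_p
    return (a * n % p, -b * n % p)

i = (0, 1)
assert mul(i, i) == ((-1) % p, 0)   # i^2 = -1
x = (3, 5)
assert mul(x, inv(x)) == (1, 0)     # multiplicative inverses exist
```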


\(i=0+1i\in F\) satisfies \(i^2=-1\), and using this instead of \(I\) enables a discussion almost identical to [2-3].

We omit the details and state only the conclusion.

Conclusion

Every \(t\) that satisfies (★) can be written as follows.

  • Take \(r\in F^{\times}\) such that \(i\cdot \dfrac{1-r}{1+r} \in \mathbb{F}_p\) and \(r^x\neq -1\) for every \(x\). Define \(t(x) = i\cdot \dfrac{1-r^x}{1+r^x}\).

Here, let’s put the part \(i\cdot \dfrac{1-r}{1+r}\in \mathbb{F}_p\) into a more manageable form.

From the equation \((a+bi)^p=a-bi\), which can be shown from the binomial theorem and \(i^p=-i\), the following holds:

\[a+bi\in \mathbb{F}_p \iff b=0 \iff a+bi= a-bi\iff a+bi=(a+bi)^p.\]

Therefore, we have the following rephrasing:

\[i\cdot \dfrac{1-r}{1+r}\in \mathbb{F}_p\iff i\cdot \dfrac{1-r}{1+r} = \biggl(i\cdot \dfrac{1-r}{1+r}\biggr)^p \iff \dfrac{1-r}{1+r}+\dfrac{1-r^p}{1+r^p}=0 \iff r^{p+1}=1.\]
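This rephrasing can be verified exhaustively for a small case; \(p=7\) is an assumed example, and elements of \(F=\mathbb{F}_{p^2}\) are pairs \((a,b)\) meaning \(a+bi\):

```python
p = 7

def mul(u, v):
    (a, b), (c, d) = u, v
    return ((a * c - b * d) % p, (a * d + b * c) % p)

def inv(u):
    a, b = u
    n = pow(a * a + b * b, p - 2, p)
    return (a * n % p, -b * n % p)

def fpow(u, e):
    r = (1, 0)
    while e:
        if e & 1:
            r = mul(r, u)
        u = mul(u, u)
        e >>= 1
    return r

# For every r other than 0 and -1, check:
# i*(1-r)/(1+r) lies in F_p  <=>  r^{p+1} = 1.
for a in range(p):
    for b in range(p):
        r = (a, b)
        if r == (0, 0) or ((a + 1) % p == 0 and b == 0):
            continue  # skip r = 0 and r = -1 (where 1 + r = 0)
        one_minus = ((1 - a) % p, -b % p)
        one_plus = ((1 + a) % p, b % p)
        val = mul((0, 1), mul(one_minus, inv(one_plus)))  # i*(1-r)/(1+r)
        in_Fp = (val[1] == 0)
        assert in_Fp == (fpow(r, p + 1) == (1, 0))
```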


[4] Determination of the existence of a solution

Now that we fully understand which \(t\) satisfy (★), let’s find the answer to the problem.

It suffices to determine the existence of \(t\) for the parts [2-3] and [3]. Let’s formulate these together.

Let \(a,b\) be given as input. Let \(F, I, m, c\) be as follows.

  • When \(p\equiv 1\pmod{4}\), let \(F=\mathbb{F}_p\), and let \(I\) be as defined in [2]. Let \(m=0\). Take \(c\in F^{\times}\) so that \(b=I\cdot \dfrac{1-c}{1+c}\) (if there is no such \(c\), there is no solution).

  • When \(p\equiv 3\pmod{4}\), let \(F\) be as defined in [3]. Let \(m=p+1\). Take \(c\in F^{\times}\) so that \(b=i\cdot \dfrac{1-c}{1+c}\).

The problem to be solved is the following:

【Problem】

You are given non-negative integers \(a,m\) and \(c\in F^{\times}\). Determine whether there is \(r\in F^{\times}\) that satisfies the following.

  • \(r^a = c\).
  • \(r^m = 1\).
  • \(r^x\neq -1\) for every \(x\).

Here, the following holds:

【Theorem】(Existence of primitive roots in finite fields)

\(F^{\times} = F\setminus\{0\}\) is a cyclic group. That is, writing \(n=|F^{\times}|\), there is \(g \in F^{\times}\) such that \(g^n=1\) and \(F^{\times}=\{g^k\mid 0\leq k < n\}\).

The proof of the existence of primitive roots in \(\mathbb{F}_p\) that is probably best known in fact proves this theorem directly.

Outline of the proof
  1. Show that the factor theorem holds in a field. From this, show that for every \(k\), the equation \(x^k=1\) has at most \(k\) solutions.
  2. For each \(d\mid n\), if there is an element \(x\) of order \(d\), then by 1., there are no \(d\)-th roots of \(1\) other than \(1,x,\ldots,x^{d-1}\). From this, it can be seen that the number of elements of order \(d\) is \(0\) or \(\phi(d)\) (where \(\phi\) is the Euler function).
  3. Let \(a_d\) be the number of elements of order \(d\); then \(n=\sum_{d\mid n}a_d\). By 2., \(a_d \in \{0,\phi(d)\}\), and combining this with the property of the Euler function \(n=\sum_{d\mid n}\phi(d)\) reveals that \(a_d=\phi(d)\) for every \(d\mid n\). In particular, \(a_n=\phi(n)>0\), so a primitive root exists.
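The theorem suggests a standard way to find a primitive root by trial; the following is a sketch for the prime-field case \(F=\mathbb{F}_p\), using the fact that \(g\) generates \(\mathbb{F}_p^{\times}\) iff \(g^{(p-1)/q}\neq 1\) for every prime \(q\mid p-1\):

```python
def prime_factors(n):
    # Distinct prime factors of n by trial division.
    fs, d = set(), 2
    while d * d <= n:
        while n % d == 0:
            fs.add(d)
            n //= d
        d += 1
    if n > 1:
        fs.add(n)
    return fs

def primitive_root(p):
    # Smallest g that generates F_p^x, for prime p.
    qs = prime_factors(p - 1)
    for g in range(2, p):
        if all(pow(g, (p - 1) // q, p) != 1 for q in qs):
            return g

p = 13
g = primitive_root(p)
# Powers of g run over all of F_p^x = {1, ..., p-1}.
assert sorted(pow(g, k, p) for k in range(p - 1)) == list(range(1, p))
```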

Let \(n = |F^{\times}| = \begin{cases}p-1 & (p\equiv 1\pmod{4}) \\ p^2-1 & (p\equiv 3\pmod{4})\end{cases}\).

Take a primitive root \(g\) and let \(r=g^k\) (we only use \(g\) and \(k\) for discussion, and there is no need to calculate their specific values). Then, the condition \(r^m=1\) can be rewritten as \(n\mid km\). This is equivalent to \(k\) being a multiple of \(\dfrac{n}{\gcd(n,m)}\).

The condition \(r^x\neq -1\) can be rewritten as \(kx\not\equiv n/2\pmod{n}\). If we write \(n = 2^en' (2\nmid n')\), then \(r^x\neq-1\) for every \(x\) if and only if \(k\) is a multiple of \(2^e\).
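The last claim can be checked by brute force on an assumed small example, \(n=12=2^2\cdot 3\) (so \(e=2\)):

```python
# k*x ≡ n/2 (mod n) has no solution x exactly when 2^e divides k.
n, e = 12, 2
for k in range(n):
    solvable = any(k * x % n == n // 2 for x in range(n))
    assert solvable == (k % (1 << e) != 0)
```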

Let \(L = \mathrm{lcm}\biggl(\dfrac{n}{\gcd(n,m)}, 2^e\biggr)\), and the problem is reduced to the following:

You are given \(a\) and \(c\). Determine whether there is an \(r\) such that \(c=r^a\) and \(r\) is an \(L\)-th power.

In other words, we need to determine whether \(c\) is an \(aL\)-th power.

By using again the fact that \(F^{\times}\) is a cyclic group, this condition is equivalent to \(c\) raised to the power of \(\dfrac{n}{\gcd(aL,n)}\) being \(1\).

In the end, the problem is reduced to the calculation of the power of \(c\), which can be done in \(O(\log p)\) time using the repeated squaring method.
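Putting this together for the case \(p\equiv 1\pmod 4\) (so \(F=\mathbb{F}_p\), \(n=p-1\), \(m=0\)), a hedged sketch of the final check; the function name and the small example values are assumptions for illustration:

```python
from math import gcd, lcm

def exists_r(p, a, m, c):
    """Decide whether some r = g^k with L | k satisfies r^a = c in F_p^x."""
    n = p - 1
    e = (n & -n).bit_length() - 1     # n = 2^e * n' with n' odd
    L = lcm(n // gcd(n, m), 1 << e)   # gcd(n, 0) = n, so m = 0 gives L = 2^e
    # c must be an (a*L)-th power, i.e. c^{n / gcd(a*L, n)} = 1.
    return pow(c, n // gcd(a * L, n), p) == 1

# Example with p = 13 (n = 12, e = 2, L = 4): the 4th powers mod 13 are
# {1, 3, 9}, so c = 3 works for a = 1 while c = 2 does not.
assert exists_r(13, 1, 0, 3)
assert not exists_r(13, 1, 0, 2)
```

For \(p\equiv 3\pmod 4\) the same criterion applies with \(n=p^2-1\), \(m=p+1\), and the power of \(c\) computed in \(\mathbb{F}_{p^2}\).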


[5] Reference

We can also discuss this problem using matrices, so let’s briefly mention it.

Once \(t(1)=c\) is fixed, the recurrence for \(t\) can be written as the linear fractional transformation \(t(x+1)=\dfrac{t(x)+c}{-c\,t(x)+1}\), so \(t(x)\) can be expressed using the \(x\)-th power of the matrix \(\begin{pmatrix}1&c\\-c&1\end{pmatrix}\).
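The matrix form can be cross-checked against the recurrence with assumed small values \(p=13\), \(c=4\) (chosen so that the denominator never vanishes):

```python
p, c = 13, 4

def step(t):
    # One step of the recurrence t(x+1) = (t(x) + c) / (-c*t(x) + 1) in F_p.
    return (t + c) * pow(-c * t + 1, p - 2, p) % p

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) % p for j in range(2)]
            for i in range(2)]

M = [[1, c], [-c, 1]]
t, P = 0, [[1, 0], [0, 1]]  # t(0) = 0, P = M^0
for _ in range(10):
    t = step(t)
    P = mat_mul(P, M)
    # Möbius action of P on t(0) = 0: t(x) = P[0][1] / P[1][1].
    assert t == P[0][1] * pow(P[1][1], p - 2, p) % p
```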

In the end, we can reduce the problem to investigating the structure of the group obtained from the matrices of the form \(\begin{pmatrix}x&-y\\y&x\end{pmatrix}\) (\(x^2+y^2\neq 0\)) by identifying a matrix with itself multiplied by a constant.

This group will be a cyclic group of order \(p-1\) or \(p+1\) depending on whether \(p\equiv 1\) or \(3\pmod{4}\). The case \(p\equiv 1\pmod{4}\) can be seen from the diagonalization of matrices and the existence of primitive roots. The case \(p\equiv 3\pmod{4}\) can be proved from the fact that the set of matrices of the form \(\begin{pmatrix}x&-y\\y&x\end{pmatrix}\) forms a field, and that a finite subgroup of the multiplicative group of a field is cyclic (note that if we associate these matrices with Gauss integers \(x+yi\), this field is isomorphic to the \(F\) in the above explanation). The author thinks that even when using matrices in this way, the proof requires the existence theorem of primitive roots in \(\mathbb{F}_p\) or \(\mathbb{F}_{p^2}\) or similar results.
