Problem: If the decimal expansion of $a$ contains all $10$ digits ($0,...,9$), then the number of distinct length-$n$ strings occurring in it (abbreviated as $n$-strings) is greater than $n+8$.
If you have established the simple lemma below, the solution is immediate. Otherwise it is very hard.
Lemma: The number $C_n$ of $n$-strings is strictly greater than $C_{n-1}$, that of $(n-1)$-strings.
Actually, we always have $C_n \ge C_{n-1}$: every $(n-1)$-string extends to an $n$-string by appending the next digit of the expansion, and this map is clearly injective. If $C_n=C_{n-1}$, the map is bijective, meaning every $(n-1)$-string has a unique continuation, which forces rationality. Rigorously: at least one $(n-1)$-string occurs infinitely often, but all digits following an occurrence of an $(n-1)$-string are completely determined by it. So if an $(n-1)$-string appears at two positions, it must reappear at that fixed distance forever, and the digits between recur with it: the expansion is eventually periodic.
(Further discussion: for a rational number, split the expansion into a finite irregular part and a recurring part. If the minimal length of the recurring string is $p$, then any $m$-string starting inside the recurring part has exactly $p$ variations when $m \ge p$. The additional variations contributed by the finite irregular part are bounded independently of $m$, since there are only finitely many starting points there. So in this case $C_n$ is non-decreasing but bounded, hence eventually constant. In the purely recurring case the eventual value is exactly $p$.)
Now $C_n \ge C_{n-1}+1 \ge ... \ge C_1+(n-1) = n+9 > n+8$, since $C_1=10$: all ten digits occur. The problem is solved.
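A quick numerical illustration (a Python sketch of my own; `digits` and `C` are ad hoc helper names): for a rational number $C_n$ stabilizes at the period, while for an expansion containing all ten digits the bound $C_n > n+8$ is visible.

```python
from fractions import Fraction

def digits(frac, count):
    """First `count` decimal digits of a Fraction in [0, 1)."""
    out = []
    for _ in range(count):
        frac *= 10
        d = int(frac)
        out.append(str(d))
        frac -= d
    return "".join(out)

def C(s, n):
    """Number of distinct length-n substrings of s."""
    return len({s[i:i + n] for i in range(len(s) - n + 1)})

# Rational: 1/7 = 0.(142857); C_n stabilizes at the period 6.
rational = digits(Fraction(1, 7), 2000)
print([C(rational, n) for n in range(1, 10)])   # [6, 6, 6, 6, 6, 6, 6, 6, 6]

# The Champernowne-style expansion 0.123456789101112... contains all
# ten digits, and C_n > n + 8 as the problem asserts.
champ = "".join(str(k) for k in range(1, 2000))
assert all(C(champ, n) > n + 8 for n in range(1, 8))
```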
This may not be really hard. But the most important thing is to see the principle behind it.
(I WILL FURTHER COMPLETE THIS POST.)
Simple Math
D I Y
Saturday, August 10, 2013
Friday, August 2, 2013
Discussion on Exercises of Commutative Algebra (I)
Units, nilpotents, and idempotents lift from $A/\mathfrak{N}$ to $A$.
Proof: Units and nilpotents are obvious; in fact, any representative of the class is a lift.
For idempotents, suppose $x^2=x\in A/\mathfrak{N}$ and lift $x$ to $A$ arbitrarily. Then $(1-x)x=0 \in A/\mathfrak{N}$, so $(1-x)^kx^k=0\in A$ for sufficiently large $k$. Moreover $(1-x)^k+x^k\equiv(1-x)+x=1\in A/\mathfrak{N}$, so $v=(1-x)^k+x^k$ is a unit of $A$ (being $1$ plus a nilpotent), and its inverse $u$ also reduces to $1\in A/\mathfrak{N}$. Now $e=ux^k$ is idempotent: since $ux^k\cdot u(1-x)^k=u^2x^k(1-x)^k=0$ and $ux^k+u(1-x)^k=1\in A$, we get $e^2=e\cdot(e+u(1-x)^k)=e$. Finally $e=ux^k\equiv x^k\equiv x\in A/\mathfrak{N}$, so $e$ lifts the idempotent $x$.
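Here is a concrete sanity check of this construction (my own Python sketch; the modulus $36$, whose nilradical is $(6)$, and the idempotent $3$ mod $6$ are illustrative choices):

```python
# Idempotent lifting in A = Z/36, nilradical N = (6):
# 3 is idempotent mod 6 (3*3 = 9 = 3 mod 6); lift it to an idempotent of A.
A = 36
x = 3                      # any representative of the idempotent 3 in A/N

# Find k with (1-x)^k * x^k = 0 in A; it exists since (1-x)x is nilpotent.
k = 1
while ((1 - x) ** k * x ** k) % A != 0:
    k += 1

v = ((1 - x) ** k + x ** k) % A   # congruent to 1 mod N, hence a unit of A
u = pow(v, -1, A)                 # its inverse in Z/36 (Python 3.8+ modular pow)
e = (u * x ** k) % A              # the lifted idempotent u * x^k

print(k, e)                       # 2 9, and indeed 9*9 = 81 = 9 (mod 36)
assert (e * e - e) % A == 0       # idempotent in A
assert (e - x) % 6 == 0           # reduces to 3 in A/N = Z/6
```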
This can be interpreted via sheaf theory, which will be discussed in later posts.
- Prime ideals of $A_1\times...\times A_n$ are of the form $A_1\times...\times \mathfrak{p}_i\times ... \times A_n$, where $\mathfrak{p}_i$ is a prime ideal of $A_i$. What about countable products? (Profinite examples exist; Boolean rings.)
Proof: Multiplying by $(0,...,1,...,0)$ we see that every ideal has the form $I=I_1\times...\times I_n$. Then $(A_1\times...\times A_n)/I=A_1/I_1\times...\times A_n/I_n$, which is a domain iff $n-1$ of the factors are $0$ and the remaining one is a domain. For an infinite index set the first step fails: an ideal need not be the product of its components (the direct sum $\oplus A_i$ is already a counterexample). Direct sums are of interest, and we will discuss them later.
Geometrically, the projection onto each factor corresponds to the inclusion of that component into the disjoint union, and multiplication by $(0,...,1,...,0)$ means restricting a function to the $i$-th component. The above shows that ideals of a product work independently on the factors, so a subset is irreducible iff it is contained in a single component and irreducible there.
- Let $f:A\rightarrow B$ be surjective. Then $f(\mathfrak{R}(A))\subseteq \mathfrak{R}(B)$, and the inclusion may be strict. What about $\mathfrak{N}$?
- If $A$ is semilocal then the above is an equality.
- Proof: For $a\in\mathfrak{R}(A)$ and $b\in B$, pick $b'\in f^{-1}(b)$; since $1+b'a$ is invertible, so is its image $1+b f(a)$, hence $f(a)\in\mathfrak{R}(B)$. For strictness, let $f$ be the quotient map from a domain with zero Jacobson radical, say $\mathbb{Z}$, by the principal ideal $(p^2)$ generated by a prime power. Then $\mathfrak{R}(B)\supseteq \mathfrak{N}(B)\supsetneq (0)=f(\mathfrak{R}(A))$.
For non-surjective morphisms the two radicals may be unrelated. For example, let $A$ be a local domain and $f$ the embedding into its field of fractions $B$. Then $f(\mathfrak{R}(A))=\mathfrak{R}(A)$ is very large, while $\mathfrak{R}(B)=0$.
Since prime ideals always pull back, we always have $f(\mathfrak{N}(A))\subseteq \mathfrak{N}(B)$. For Jacobson radicals the reason is actually the same: when $f$ is surjective, maximal ideals pull back. This is like saying that if a function vanishes at every closed point, then it vanishes at every closed point of any closed subset; and if it vanishes at every point, its pullback vanishes at every point. In the polynomial case, since $\mathfrak{N}=\mathfrak{R}$, this reduces to trivial intuition.
- Proof of the semilocal case: denote the kernel by $I$ and the (finite) collection of maximal ideals by $\mathcal{M}$. The claim is equivalent to $\cap_{\mathfrak{m} \in\mathcal{M}}\mathfrak{m} + I=\cap_{\mathfrak{m}\supseteq I}\mathfrak{m}$. Passing to $A/\cap_{\mathfrak{m} \in\mathcal{M}} \mathfrak{m}\cong \prod_{\mathfrak{m} \in\mathcal{M}} A/\mathfrak{m}$ (by the Chinese remainder theorem), and writing $I$ again for its image, the claim becomes $I=\cap_{\mathfrak{m}\supseteq I}\mathfrak{m}$. This is now an equation of ideals in a finite product of fields, where by 2. above every ideal is a product of factors that are either the whole field or $0$. The image of $I$ has $0$ in the components with $\mathfrak{m}\supseteq I$ and the whole field $k_i$ in the others, which is exactly $\cap_{\mathfrak{m}\supseteq I}\mathfrak{m}$. This fails when $|\mathcal{M}|$ is infinite, because then the Chinese remainder theorem does not hold.
Continuing the discussion above: if in addition the closed points are finite in number, then a function vanishing on a subset of them is induced by some function vanishing on all of them. Take the example of $\mathbb{Z}$: the element $p$ vanishes on the single point of $\mathrm{Spec}(\mathbb{Z}/p^2\mathbb{Z})$, but is not induced by any element vanishing on all of $\mathrm{Spec}\,\mathbb{Z}$, since such an element must be $0$. This happens because we cannot make it vanish at all the other primes simultaneously: the required infinite product does not make sense. In $\mathrm{Spec}(\prod_{p=2,3,5,...}\mathbb{Z}/p^2\mathbb{Z})$, however, this does hold, as we can always pull back to $(2,3,5,...)$.
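As a toy check of the semilocal equality $f(\mathfrak{R}(A))=\mathfrak{R}(B)$ discussed above (my own Python sketch; $A=\mathbb{Z}/12$ with maximal ideals $(2),(3)$ and $B=\mathbb{Z}/4$ are illustrative choices):

```python
# A = Z/12 is semilocal with maximal ideals (2), (3), so R(A) = (6).
# For the surjection f: Z/12 -> Z/4, the claim is f(R(A)) = R(Z/4) = (2).
def jacobson_radical(n):
    """Jacobson radical of Z/n as a set: the intersection of the
    maximal ideals (p) for primes p dividing n."""
    primes = [p for p in range(2, n + 1)
              if n % p == 0 and all(p % q for q in range(2, p))]
    rad = set(range(n))
    for p in primes:
        rad &= {a for a in range(n) if a % p == 0}
    return rad

RA = jacobson_radical(12)            # {0, 6}
image = {a % 4 for a in RA}          # f(R(A)) inside Z/4
assert RA == {0, 6}
assert image == jacobson_radical(4)  # both are (2) = {0, 2} in Z/4
```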
- An integral domain $A$ is a UFD iff both of the following are satisfied:
- Every irreducible element is prime.
- Principal ideals satisfy the A.C.C.
- Let $\{P_{\lambda}\}_{\lambda\in\Lambda}$ be a non-empty totally ordered (by inclusion) family of prime ideals. Then $\cap P_{\lambda}$ is prime. Thus for any ideal $I$, there is some minimal prime ideal containing $I$.
Proof: Suppose $ab\in\cap P_{\lambda}$ and $a\notin P_{\lambda_0}$ for some $\lambda_0$; we show $b\in\cap P_{\lambda}$. If $P_{\lambda}\subseteq P_{\lambda_0}$, then $a\notin P_{\lambda}$, so $ab\in P_{\lambda}$ forces $b\in P_{\lambda}$; if $P_{\lambda}\supseteq P_{\lambda_0}$, then $b\in P_{\lambda_0}\subseteq P_{\lambda}$ by the same argument. The corollary then follows from Zorn's lemma, applied to the primes containing $I$ ordered by reverse inclusion.
- Let $A$ be a ring and $P_1,...,P_r$ ideals, at least $r-2$ of which are prime. If $I\subseteq \cup_i P_i$, then $I\subseteq P_i$ for some $i$.
Proof: This is mysterious to me: the proof is not hard, but I do not see why it holds. I will write more when I understand its meaning or usage.
- In a ring $A$, if every ideal $I\not\subseteq \mathfrak{N}$ contains a nonzero idempotent, then $\mathfrak{N}=\mathfrak{R}$.
Proof: When $A$ is reduced, this says: if every nonzero ideal contains a nonzero idempotent, then $\mathfrak{R}=0$. Indeed, if $a\ne 0$, then $(a)$ contains a nonzero idempotent $e=ka$; from $e(1-e)=0$ we see $1-ka$ is a zero-divisor, hence not a unit, so $a\notin \mathfrak{R}$. The general case follows by passing to $A/\mathfrak{N}$ (a nonzero idempotent is never nilpotent, so it survives in the quotient). But this feels like an awkward exercise.
- A local ring contains no idempotents $\ne 0,1$.
Proof: Otherwise $A$ would split as a direct product, and by 2. above it would have at least two maximal ideals. Geometrically, a local picture cannot be a disjoint union.
- The set $\mathfrak{Z}$ of zero-divisors is a union of prime ideals.
Proof: The non-zero-divisors form a multiplicative set $S$: if $a,b$ are non-zero-divisors and $abx=0$, then $bx=0$ and hence $x=0$. The prime ideals of the localization $S^{-1}A$ correspond exactly to the primes of $A$ consisting of zero-divisors, and every zero-divisor generates an ideal of zero-divisors, hence lies in such a prime. This is similar to the fact that every non-nilpotent element lies outside some prime ideal, or that the localization at a prime ideal is local.
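A toy illustration (my own sketch, in $\mathbb{Z}/12$, where the zero-divisors turn out to be exactly $(2)\cup(3)$):

```python
# Zero-divisors of Z/12 versus the union of the primes (2) and (3).
n = 12
zero_divisors = {a for a in range(n)
                 if any(a * b % n == 0 for b in range(1, n))}

def ideal(g):
    """The principal ideal (g) in Z/n as a set."""
    return {g * k % n for k in range(n)}

assert zero_divisors == ideal(2) | ideal(3)
print(sorted(zero_divisors))   # [0, 2, 3, 4, 6, 8, 9, 10]
```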
Thursday, August 1, 2013
Polynomials and Power Series (I)
Today we discuss something on polynomials.
Let $A$ be a commutative unital ring, and $A[x]$ the polynomial ring over $A$.
Let $f=a_0+a_1x+...+a_n x^n$. If $a_1,a_2,...,a_n$ are nilpotent, then so is $f-a_0$. If moreover $a_0$ is invertible, $f$ is invertible; if instead $a_0$ is nilpotent, $f$ is nilpotent. The converses are all true. For nilpotency: the highest-degree term of $f^m$ is the single term $a_n^m x^{mn}$, so if $f$ is nilpotent, $a_n$ is forced to be nilpotent; but then $f-a_n x^n$ is again nilpotent, and we induct downward. For invertibility: immediately $a_0$ is invertible. Suppose $fg=1$ with $g=b_0+b_1x+...+b_r x^r$. Then $a_n b_r=0$, $a_n b_{r-1}+a_{n-1} b_r=0$, and so on. Multiplying the second equation by $a_n$ gives $a_n^2 b_{r-1}=0$; repeating this yields $a_n^{r+1} b_0=0$, and since $b_0$ is invertible, $a_n$ is nilpotent. Then $f-a_n x^n$ is again invertible (a unit minus a nilpotent), and we induct.
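A small sketch (an example of my own, not from the book): in $(\mathbb{Z}/8)[x]$ the coefficient $2$ is nilpotent, so $1+2x$ should be a unit; its inverse is the finite geometric series $1-2x+4x^2$, since $(2x)^3=0$.

```python
# Verify that 1 + 2x is a unit in (Z/8)[x] with inverse 1 - 2x + 4x^2.
MOD = 8

def polymul(f, g, mod=MOD):
    """Multiply coefficient lists (lowest degree first) over Z/mod."""
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] = (out[i + j] + a * b) % mod
    return out

f = [1, 2]                 # 1 + 2x
g = [1, -2 % MOD, 4]       # 1 - 2x + 4x^2
print(polymul(f, g))       # [1, 0, 0, 0], i.e. f*g = 1 in (Z/8)[x]
```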
In particular, this implies $\mathfrak{N}=\mathfrak{R}$ in polynomial rings: if $f\in\mathfrak{R}$, then $1+xf$ is invertible, which by the above means $a_0,...,a_n$ are all nilpotent, hence $f$ is nilpotent. In the proof of Hilbert's Nullstellensatz, we will see that this is valid also in prime quotients of polynomial rings.
If $f$ is a zero-divisor, then $a_0,...,a_n$ are all zero-divisors; in fact a single nonzero element annihilates $f$. Indeed, choose $g=b_0+...+b_r x^r\ne 0$ of least degree with $fg=0$. Then $a_n b_r=0$, so $f\cdot a_n g=0$ with $\mathrm{deg}\, a_n g<\mathrm{deg}\, g$; by minimality $a_n g=0$. This yields $(f-a_n x^n) g=0$, and repeating the argument downward gives $a_i g=0$ for all $i$. In particular $a_i b_r=0$ for all $i$, i.e. $b_r f=0$.
A general version of Gauss's lemma holds. If $(a_0,...,a_n)=(1)$, then $f$ is called primitive; if $f,g$ are primitive, then so is $f g$. The proof is analogous to the classical one: if the coefficients $c_i$ of $fg$ generate a proper ideal, they lie in some maximal ideal $\mathfrak{m}$, so in $(A/\mathfrak{m})[x]$ we have $\bar f \bar g=0$. Since this is a domain, $\bar f=0$ or $\bar g=0$, contradicting primitivity.
The above generalizes easily to several variables (in fact arbitrarily many, since a polynomial involves only finitely many terms), keeping in mind $A[X_1,...,X_n]=A[X_1,...,X_{n-1}][X_n]$.
The case of power series differs in many respects. First, if $f=a_0+a_1 x+...$, then $f$ is invertible if and only if $a_0$ is: writing $g=b_0+b_1 x+...$, we have $f g=a_0 b_0 + (a_0 b_1+a_1 b_0)x+(a_0 b_2+a_1 b_1+a_2 b_0)x^2+...$, and the $b_i$ can be solved inductively as soon as $a_0 b_0=1$ has a solution. Second, although $f$ nilpotent implies every $a_i$ nilpotent, via a similar induction on the lowest-degree term, the converse is not true. In fact there are restrictions on the vanishing degrees: if $f^s=0$, then $a_0^s=0$, so $(f-a_0)^{2s}=0$; then $a_1^{2s}=0$, so $(f-a_0-a_1 x)^{4s}=0$; in general $a_i^{2^i s}=0$. So if the least $s_i$ with $a_i^{s_i}=0$ grows fast enough that $2^{-i} s_i\rightarrow\infty$ as $i\rightarrow \infty$, then $f$ is not nilpotent. For example take $s_i=3^i$, $A=\prod_{i\in\mathbb{Z}^+}\mathbb{C}[x_i]/(x_i^{s_i})$, $a_i=x_i$. The same bound applies in the polynomial case, but there $n$ is finite, so the converse does hold.
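The inductive solution for the $b_i$ is easy to implement. The following Python sketch (helper names are mine) inverts $1-x-x^2$ over $\mathbb{Q}$ and recovers the Fibonacci numbers:

```python
from fractions import Fraction

def series_inverse(a, terms):
    """First `terms` coefficients of 1/f for f = a[0] + a[1] x + ...,
    solving a_0 b_n + a_1 b_{n-1} + ... = 0 inductively for b_n."""
    b = [Fraction(1) / a[0]]
    for n in range(1, terms):
        s = sum(Fraction(a[i]) * b[n - i]
                for i in range(1, min(n, len(a) - 1) + 1))
        b.append(-s / a[0])
    return b

# 1/(1 - x - x^2) is the generating function of the Fibonacci numbers.
print([int(c) for c in series_inverse([1, -1, -1], 8)])
# [1, 1, 2, 3, 5, 8, 13, 21]
```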
For $g=b_0+b_1x+...$, the element $1+g f$ is invertible iff its constant term $1+b_0 a_0$ is invertible. So $f\in\mathfrak{R}(A[[x]])$ iff $1+b_0 a_0$ is a unit for every $b_0\in A$, i.e. iff $a_0\in\mathfrak{R}(A)$.
The set $F(\mathfrak{I})$ of those $f$ with $a_0\in \mathfrak{I}$ is an ideal of $A[[x]]$, namely $\mathfrak{I}+(x)$. Moreover $A/\mathfrak{I}\cong A[[x]]/F(\mathfrak{I})$. So if $\mathfrak{I}$ is prime, so is $F(\mathfrak{I})$; likewise for maximality. The same holds in $A[x]$.
The above topic is from Atiyah, M. F.; MacDonald, I. G. (February 21, 1994). "Chapter 1: Rings and Ideals". Introduction to Commutative Algebra. Westview Press. p. 11. ISBN 978-0-201-40751-8.
The case of countably many variables is also of interest. We will discuss this in later posts.
Over a Commutative Ring
Nilpotents and units are closely related. In a commutative unital ring $R$, if $x$ is nilpotent and $a$ is a unit, then $a+x$ is again a unit. Conversely, if $1+x y$ is a unit for every $y\in R$, then $x\in\mathfrak{R}$, the Jacobson radical, whose elements behave as "approximately nilpotent".
Wednesday, July 10, 2013
Generating Functions and Formal Power Series
Today we discuss some generating functions.
Problem 1: Given a finite set of positive integers $T$, let $\mathfrak{T}_n$ be the collection of sequences $(t_1,t_2,...,t_m)$ such that $\sum_{i=1}^m t_i=n$ and each $t_i\in T$. Let $a_n=|\mathfrak{T}_n|$ for $n\ge1$, $a_0=1$, and $f(x)=\sum_{n=0}^{\infty}a_n x^n$. Find $f(x)$ explicitly.
Solution: It is not hard to see the recursion $a_n=\sum_{t\in T} a_{n-t}$ for $n\ge1$, if we set $a_i=0$ for negative $i$. Hence $f(x)=1+\sum_{t\in T} x^t f(x)$, so $f(x)=1/(1-\sum_{t\in T} x^t)$, a rational function.
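The recursion is easy to check against brute-force enumeration. In this Python sketch the set $T=\{1,3,4\}$ is an arbitrary choice of mine:

```python
from functools import lru_cache
from itertools import product

T = (1, 3, 4)   # a sample finite set of allowed parts

@lru_cache(maxsize=None)
def a_rec(n):
    """a_n via the recursion a_n = sum_{t in T} a_{n-t}, with a_0 = 1."""
    if n < 0:
        return 0
    if n == 0:
        return 1
    return sum(a_rec(n - t) for t in T)

def a_brute(n):
    """Directly count sequences (t_1,...,t_m) with entries in T summing to n."""
    return sum(1 for m in range(n + 1)
                 for seq in product(T, repeat=m) if sum(seq) == n)

assert all(a_rec(n) == a_brute(n) for n in range(12))
print([a_rec(n) for n in range(10)])   # [1, 1, 1, 2, 4, 6, 9, 15, 25, 40]
```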
Variation 1: If $T$ is infinite, what happens? Would $f(x)$ still be rational?
We first analyze simple cases. If $T=\mathbb{N}^+$, it is expected that $f(x)=1+\sum_{t=1}^{\infty} x^t f(x)=1+f(x) x/(1-x)$, so that $f(x)=(1-x)/(1-2x)=1+\sum_{n=1}^{\infty} 2^{n-1} x^n$. Indeed, in this case counting the sequences amounts to cutting a row of $n$ objects into consecutive blocks: you can choose whether to cut between the $i$-th and $(i+1)$-th objects for $1\le i\le n-1$, giving $2^{n-1}$ choices in all, which justifies the expansion.
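The cut-point argument can be verified by brute force (a small Python sketch of mine):

```python
from itertools import product

def count_compositions(n):
    """Count all sequences of positive integers summing to n directly."""
    parts = range(1, n + 1)
    return sum(1 for m in range(1, n + 1)
                 for seq in product(parts, repeat=m) if sum(seq) == n)

# Each composition corresponds to a subset of the n-1 gaps between objects.
assert all(count_compositions(n) == 2 ** (n - 1) for n in range(1, 8))
print([count_compositions(n) for n in range(1, 8)])  # [1, 2, 4, 8, 16, 32, 64]
```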
I think it is equivalent to $f$ being rational.
Theorem: $\mathbb{Q}_p(t)\cap\mathbb{Q}[[t]]=\mathbb{Q}(t)$
It is very interesting that this theorem is used in proving the rationality of the $\zeta$-functions of algebraic varieties, which is part of the Weil conjectures.