Problem: If the decimal expansion of an irrational number $a$ contains all 10 digits $0,\dots,9$, then the number of distinct length-$n$ strings occurring in it (shortened to $n$-strings) is greater than $n+8$. (Irrationality is needed: a purely periodic expansion through all ten digits has only 10 $n$-strings for every $n$.)
Once the simple lemma below is established, the solution is immediate; without it, the problem looks intractable.
Lemma: If $a$ is irrational, the number $C_n$ of $n$-strings is strictly greater than $C_{n-1}$, the number of $(n-1)$-strings.
Actually, we always have $C_n \geq C_{n-1}$: every $(n-1)$-string that occurs extends to an $n$-string by reading one more digit, and truncation maps the $n$-strings onto the $(n-1)$-strings. If $C_n = C_{n-1}$, truncation is a bijection, so each $(n-1)$-string has a unique continuation digit, which forces rationality. Rigorously: at least one $(n-1)$-string occurs infinitely often, but all digits after an occurrence of an $(n-1)$-string are completely determined by it. So if some $(n-1)$-string appears at two positions a distance $d$ apart, it reappears every $d$ digits thereafter, and the digits in between recur with period $d$.
(Further discussion: For a rational number, split the expansion into a finite pre-period and a recurring part. If the minimal length of the recurring string is $n$, then the $m$-strings starting inside the recurring part have exactly $n$ variations when $m \geq n$. The additional variations contributed by the finite irregular part are bounded regardless of $m$, since there are only finitely many starting points there. So $C_m$ in this case is non-decreasing but bounded, hence it reaches some value and stays there. In the purely recurring case that value is exactly $n$, as defined above.)
Now $C_n \geq C_{n-1} + 1 \geq \dots \geq C_1 + (n-1) = 10 + (n-1) = n + 9 > n + 8$, since all 10 digits occur, i.e. $C_1 = 10$. The problem is solved.
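A quick numerical illustration of the lemma (a sketch of mine, not part of the original post; the helper `n_strings` and the chosen expansions are illustrative): for an irrational expansion the counts keep climbing, while for a rational one they stabilize at the period length.

```python
# Count distinct n-strings in a prefix of a decimal expansion.
# Illustrative helper, not from the post.

def n_strings(digits: str, n: int) -> int:
    """Number of distinct length-n substrings of `digits`."""
    return len({digits[i:i + n] for i in range(len(digits) - n + 1)})

# Champernowne's constant 0.123456789101112... is irrational:
champernowne = "".join(str(k) for k in range(1, 700))
# 1/7 = 0.142857142857... is rational with minimal period 6:
one_seventh = "142857" * 300

for n in range(1, 7):
    print(n, n_strings(champernowne, n), n_strings(one_seventh, n))
# For the irrational prefix the counts climb well past n + 8,
# while for 1/7 they are stuck at 6, the period length.
```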
This may not be really hard. But the most important thing is to see the principle behind it.
(I WILL FURTHER COMPLETE THIS POST.)
Saturday, August 10, 2013
Friday, August 2, 2013
Discussion on Exercises of Commutative Algebra (I)
- Units, nilpotents, and idempotents lift from A/N to A. (Throughout, N denotes the nilradical and R the Jacobson radical.)
Proof: Units and nilpotents are obvious. In fact they lift to any of their representatives.
For idempotents, suppose $x^2 = x$ in $A/N$, i.e. $x(1-x) \in N$. Then $(1-x)^k x^k = 0$ in $A$ for sufficiently large $k$. Also $(1-x)^k + x^k \equiv (1-x) + x = 1 \pmod N$, so $(1-x)^k + x^k$ lifts a unit and is therefore itself a unit; let $u$ be its inverse, so $u \equiv 1 \pmod N$. Set $e = ux^k$. Then $e \cdot u(1-x)^k = u^2 x^k (1-x)^k = 0$ while $e + u(1-x)^k = 1$, so $e(1-e) = 0$, i.e. $e$ is idempotent; moreover $e = ux^k \equiv x \pmod N$.
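Here is a concrete run of this lifting recipe (my own illustration, not from the post), in $A = \mathbf{Z}/72$, where $N = (6)$ and $3$ is idempotent in $A/N = \mathbf{Z}/6$:

```python
# Lifting the idempotent 3 of Z/6 to an idempotent of Z/72,
# following the proof: e = u * x^k with u the inverse of x^k + (1-x)^k.
# All names below are illustrative.

n = 72          # A = Z/72
nilrad = 6      # N(A) = (6), so A/N = Z/6
x = 3           # idempotent mod 6: 3*3 = 9 = 3 (mod 6)

k = 3           # (x*(1-x))^k = (-6)^3 = -216 = 0 (mod 72)
s = (pow(x, k, n) + pow(1 - x, k, n)) % n   # lifts 1 mod N, hence a unit
u = pow(s, -1, n)                           # its inverse in Z/72 (Python 3.8+)
e = (u * pow(x, k, n)) % n                  # the lifted idempotent

print(e)                        # 9
assert (e * e) % n == e         # idempotent in Z/72
assert (e - x) % nilrad == 0    # reduces to 3 in Z/6
```

Indeed $9^2 = 81 \equiv 9 \pmod{72}$, and $9 \equiv 3 \pmod 6$.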
This can be interpreted via sheaf theory, which is to be discussed in later posts.
- Prime ideals of $A_1 \times \dots \times A_n$ are of the form $A_1 \times \dots \times p_i \times \dots \times A_n$, where $p_i$ is a prime ideal of $A_i$. What about countable products? (Profinite examples exist; cf. Boolean rings.)
Proof: Multiplying by $(0,\dots,1,\dots,0)$ we see every ideal has the form $I = I_1 \times \dots \times I_n$. Then $(A_1 \times \dots \times A_n)/I = A_1/I_1 \times \dots \times A_n/I_n$, which is a domain iff $n-1$ of the factors are zero and the remaining one is a domain. For an infinite index set this breaks down: the multiplication trick no longer pins down every ideal (the direct sum $\bigoplus A_i$ is an ideal of $\prod A_i$ that is not a product of ideals). Direct sums are of interest, and we will discuss them later.
The projection onto each factor corresponds geometrically to the inclusion of that component into the disjoint union. Multiplication by $(0,\dots,1,\dots,0)$ means restricting a function to the $i$-th component. The above shows that ideals of a finite product work independently on the factors, and so a subset is irreducible iff it is contained in a single component and is irreducible there.
- Let $f: A \to B$ be surjective. Then $f(R(A)) \subseteq R(B)$. The inclusion may be strict. What about $N$?
- If A is semilocal then the above is an equality.
- Since $1 + f^{-1}(b)\,a$ is invertible for every $a \in R(A)$, and the image of a unit under a surjection is a unit, $1 + b f(a)$ is invertible for all $b \in B$; hence $f(a) \in R(B)$. For strictness, let $f$ be the quotient map from a domain $A$ with $R(A) = 0$ by a principal ideal generated by a power, e.g. $f: \mathbf{Z} \to \mathbf{Z}/p^2$. Then $R(B) \supseteq N(B) \supsetneq (0) = f(R(A))$.
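The strict inclusion can be checked by brute force for $f: \mathbf{Z} \to \mathbf{Z}/4$ (a sketch of mine; the helper names are illustrative). Here $R(\mathbf{Z}) = 0$, so $f(R(\mathbf{Z})) = \{0\}$, while the code computes $R(\mathbf{Z}/4)$ straight from the characterization "$x \in R$ iff $1 + xy$ is a unit for all $y$":

```python
# Brute-force check of f(R(A)) strictly inside R(B) for f: Z -> Z/4.

from math import gcd

def units(n):
    return {x for x in range(n) if gcd(x, n) == 1}

def jacobson_radical(n):
    """x is in R(Z/n) iff 1 + x*y is a unit for every y."""
    U = units(n)
    return {x for x in range(n) if all((1 + x * y) % n in U for y in range(n))}

print(jacobson_radical(4))   # {0, 2}: strictly bigger than f(R(Z)) = {0}
```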
For non-surjective morphisms, the two radicals may have no relation at all. For example, let $A$ be a local domain and $f$ its embedding into $B$, its field of fractions. Then $f(R(A)) = R(A)$ is the maximal ideal, which can be very large, but $R(B) = 0$.
Since prime ideals always pull back, we always have $f(N(A)) \subseteq N(B)$. For Jacobson radicals the reason is the same, because when $f$ is surjective, maximal ideals pull back to maximal ideals. This is like saying: if a function vanishes at every closed point, it vanishes at every closed point of a closed subset; and if it vanishes at every point, its pullback vanishes at every point. In the polynomial case, since $N = R$, this reduces to the trivial intuition.
- Proof of the semilocal case: denote the kernel by $I$ and the (finite) set of maximal ideals by $M$. The equality is equivalent to $\bigcap_{m \in M} m + I = \bigcap_{m \supseteq I} m$. Passing to $A/\bigcap_{m \in M} m \cong \prod_{m \in M} A/m$ (Chinese remainder theorem), it is equivalent to $\bar I = \overline{\bigcap_{m \supseteq I} m}$. This quotient is a product of fields, so by 2. above each component of an ideal is either the whole field $k_m$ or $0$. Now $\bar I$ has component $0$ exactly at those $m \supseteq I$ and $k_m$ otherwise, which is exactly the image of $\bigcap_{m \supseteq I} m$. This fails when $|M|$ is infinite, because then the Chinese remainder theorem does not hold.
Continuing the discussion, this says: if moreover the closed points are finite in number, then a function vanishing on a subset of them is induced by some function vanishing on all of them. Take the example of $\mathbf{Z}$: $p$ vanishes on the single point of $\operatorname{Spec}(\mathbf{Z}/p^2\mathbf{Z})$, but cannot be induced by an element vanishing on all of $\operatorname{Spec}\mathbf{Z}$, since such an element must be $0$. This happens because we cannot make it vanish at all other primes simultaneously: the required infinite product does not make sense in $\mathbf{Z}$. However in $\operatorname{Spec}(\prod_{p=2,3,5,\dots}\mathbf{Z}/p^2\mathbf{Z})$ this holds, as we can always pull back to $(2,3,5,\dots)$.
- An integral domain A is a UFD iff both of the following are satisfied:
- Every irreducible element is prime.
- Principal ideals satisfy the A.C.C.
- Let $\{P_\lambda\}_{\lambda \in \Lambda}$ be a non-empty family of prime ideals, totally ordered by inclusion. Then $\bigcap P_\lambda$ is prime. Thus for any ideal $I$, there is some minimal prime ideal containing $I$.
Proof: Suppose $ab \in \bigcap P_\lambda$ but $a, b \notin \bigcap P_\lambda$: say $a \notin P_\alpha$ and $b \notin P_\beta$, with (by total ordering) $P_\alpha \subseteq P_\beta$. Then $a, b \notin P_\alpha$ while $ab \in P_\alpha$, contradicting primeness. The corollary follows by applying Zorn's lemma to the primes containing $I$, ordered by reverse inclusion.
- Let $A$ be a ring and $P_1, \dots, P_r$ ideals, of which at least $r-2$ are prime. If $I \subseteq \bigcup_i P_i$, then $I \subseteq P_i$ for some $i$.
Proof: This is mysterious. The proof (by induction on $r$) is not hard, but I do not know why it holds. I will write more when I understand its meaning or usage.
- In a ring $A$, if every ideal $I \not\subseteq N$ contains a nonzero idempotent, then $N = R$.
Proof: Notice that when $A$ is reduced, this amounts to saying: if every nonzero ideal contains a nonzero idempotent, then $R = 0$. Indeed, if $a \neq 0$, then $(a)$ contains a nonzero idempotent $e = ka$; from $e(1-e) = 0$ with $e \neq 0$, the element $1 - ka$ is a zero-divisor, hence not a unit, so $a \notin R$. The general case follows by passing to $A/N$. But this is more like an awkward exercise.
- A local ring contains no idempotents $\neq 0, 1$.
Proof: Otherwise it would split as a direct product, and by 2. above it would have at least two maximal ideals. Geometrically, a local picture cannot be a disjoint union.
- The set $Z$ of zero-divisors is a union of prime ideals.
Proof: The non-zero-divisors form a multiplicative set $S$: if $a, b$ are non-zero-divisors and $abx = 0$, then $bx = 0$ and then $x = 0$. The primes of the localization $S^{-1}A$ correspond exactly to the primes consisting of zero-divisors. Given $a \in Z$, the ideal $(a)$ consists of zero-divisors, so it is disjoint from $S$; enlarge it by Zorn's lemma to an ideal maximal among those disjoint from $S$, which is prime. So $Z$ is the union of these primes. This is similar to the fact that every non-nilpotent element lies outside some prime ideal, or that the localization at a prime ideal is local.
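In $A = \mathbf{Z}/12$ this can be seen by brute force (an illustration of mine; the helper name is made up): the zero-divisors are exactly $(2) \cup (3)$, a union of the two prime ideals.

```python
# The zero-divisors of Z/12 form the union of the primes (2) and (3).

def zero_divisors(n):
    """Elements a of Z/n with a*b = 0 for some b != 0 (includes 0)."""
    return {a for a in range(n) if any((a * b) % n == 0 for b in range(1, n))}

n = 12
Z = zero_divisors(n)
union_of_primes = {a for a in range(n) if a % 2 == 0 or a % 3 == 0}
print(sorted(Z), Z == union_of_primes)
```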
Thursday, August 1, 2013
Polynomials and Power Series (I)
Today we discuss something on polynomials.
Let A be a commutative unital ring, and A[x] the polynomial ring over A.
Let $f = a_0 + a_1x + \dots + a_nx^n$. If $a_1, a_2, \dots, a_n$ are nilpotent, so is $f - a_0$. If moreover $a_0$ is invertible, $f$ is invertible (a unit plus a nilpotent); if instead $a_0$ is nilpotent, $f$ is nilpotent. Both converses are true. For nilpotency: the highest-degree term of $f^m$ is the single term $a_n^m x^{mn}$, so if $f$ is nilpotent, $a_n$ is forced to be; but then $f - a_nx^n$ is again nilpotent, and we induct downwards. For invertibility: $a_0$ is immediately invertible. Suppose $fg = 1$ with $g = b_0 + b_1x + \dots + b_rx^r$. Then $a_nb_r = 0$, $a_nb_{r-1} + a_{n-1}b_r = 0$, and so on. Multiplying the second equation by $a_n$ gives $a_n^2 b_{r-1} = 0$; repeating this yields $a_n^{r+1}b_0 = 0$, and since $b_0$ is invertible, $a_n$ is nilpotent.
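The criterion can be checked on a small example (my illustration, not from the post): in $(\mathbf{Z}/4)[x]$, $f = 1 + 2x$ has invertible constant term and nilpotent higher coefficient ($2^2 = 0$ mod $4$), so it is a unit; here it is even its own inverse.

```python
# (1 + 2x)^2 = 1 in (Z/4)[x]. The helper name is illustrative.

def polymul_mod(f, g, m):
    """Multiply coefficient lists f, g over Z/m (index = degree)."""
    h = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            h[i + j] = (h[i + j] + a * b) % m
    return h

f = [1, 2]                      # 1 + 2x over Z/4
print(polymul_mod(f, f, 4))     # [1, 0, 0], i.e. (1 + 2x)^2 = 1
```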
In particular, this implies that $N = R$ in polynomial rings: if $f \in R$, then $1 + xf$ is invertible, which by the above means $a_0, \dots, a_n$ are all nilpotent, hence $f$ is nilpotent. In the proof of the Hilbert Nullstellensatz, we will see that this is valid also in prime quotients of polynomial rings.
If $f$ is a zero-divisor, then $a_0, \dots, a_n$ are all zero-divisors. Indeed, choose $g = b_0 + \dots + b_rx^r \neq 0$ of least degree with $fg = 0$. Then $a_nb_r = 0$, so $f \cdot (a_ng) = 0$ with $\deg(a_ng) < \deg g$; minimality forces $a_ng = 0$. This yields $(f - a_nx^n)g = 0$, and repeating the argument gives $a_ig = 0$ for all $i$. In particular $a_ib_r = 0$ for all $i$, so the nonzero element $b_r$ kills every coefficient of $f$.
A general version of Gauss's lemma holds. Call $f$ primitive if $(a_0, \dots, a_n) = (1)$. If $f, g$ are primitive, then so is $fg$. The proof is analogous: if the coefficients $c_0, \dots, c_{n+r}$ of $fg$ all lie in some maximal ideal $p$, then in $(A/p)[x]$ we have $\bar f \bar g = 0$. Since this is a domain, $\bar f$ or $\bar g$ is $0$, i.e. all coefficients of $f$ or of $g$ lie in $p$, a contradiction.
The above generalizes easily to several variables (in fact to arbitrarily many, since a polynomial involves only finitely many terms), keeping in mind $A[X_1, \dots, X_n] = A[X_1, \dots, X_{n-1}][X_n]$.
The case of power series is different in many respects. First, if $f = a_0 + a_1x + \cdots$, then $f$ is invertible if and only if $a_0$ is: writing $g = b_0 + b_1x + \cdots$, we have $fg = a_0b_0 + (a_0b_1 + a_1b_0)x + (a_0b_2 + a_1b_1 + a_2b_0)x^2 + \cdots$, and the $b_i$ can be solved for inductively as soon as $a_0b_0 = 1$ has a solution. Second, although $f$ nilpotent implies every $a_i$ nilpotent, via a similar induction on the lowest-degree term, the converse is not true. In fact there are restrictions on the nilpotency degrees: if $f^s = 0$, then $a_0^s = 0$, so $(f - a_0)^{2s} = 0$; then $a_1^{2s} = 0$, so $(f - a_0 - a_1x)^{4s} = 0$; in general $a_i^{2^is} = 0$. So if the least $s_i$ with $a_i^{s_i} = 0$ increases rapidly, making $2^{-i}s_i \to \infty$ as $i \to \infty$, then $f$ is not nilpotent. For example take $s_i = 3^i$, $A = \prod_{i \in \mathbf{Z}^+} \mathbf{C}[x_i]/(x_i^{s_i})$, $a_i = x_i$. The same bound holds in the polynomial case, but there $n$ is finite.
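The inductive inversion above can be carried out term by term: $b_0 = a_0^{-1}$ and $b_k = -a_0^{-1}(a_1b_{k-1} + \dots + a_kb_0)$. A sketch of mine over $\mathbf{Q}$ (the helper name is made up):

```python
# Invert a power series with invertible constant term, coefficient by
# coefficient, following the inductive argument in the text.

from fractions import Fraction

def series_inverse(a, terms):
    """First `terms` coefficients of 1/f, where `a` lists f's coefficients."""
    b = [Fraction(1) / a[0]]
    for k in range(1, terms):
        s = sum(a[i] * b[k - i] for i in range(1, min(k, len(a) - 1) + 1))
        b.append(-b[0] * s)
    return b

# f = 1 - x, whose inverse is the geometric series 1 + x + x^2 + ...
print(series_inverse([Fraction(1), Fraction(-1)], 6))   # six coefficients, all 1
```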
$1 + gf$ is invertible iff its constant term $1 + a_0b_0$ is invertible. So $f \in R(A[[x]])$ iff $a_0 \in R(A)$.
The set $F(I)$ of series $f$ with $a_0 \in I$ is an ideal of $A[[x]]$, and $A/I \cong A[[x]]/F(I)$. So if $I$ is prime, so is $F(I)$; the same goes for maximality. In fact, the same holds in $A[x]$.
The above topic is from Atiyah, M. F.; MacDonald, I. G. (February 21, 1994). "Chapter 1: Rings and Ideals". Introduction to Commutative Algebra. Westview Press. p. 11. ISBN 978-0-201-40751-8.
The case of countable variables is also of interest. We will discuss this in later posts.
Over a Commutative Ring
Nilpotents and units are closely related. In a commutative unital ring $A$, if $x$ is nilpotent and $a$ is a unit, then $a + x$ is again a unit. Conversely, if $1 + xy$ is a unit for every $y \in A$, then $x \in R$, the Jacobson radical, which one may think of as "approximately nilpotent".
Labels: algebra, commutative algebra, polynomials, power series