Problems in Mathematics

Problems are listed in increasing order of difficulty. When a problem is simply a mathematical statement, the reader is expected to supply a proof. Answers are given (or will be given) to all of the problems. This is mostly for quality control: the answers allow contributors other than the original writer of a problem to check its validity. Readers are therefore strongly discouraged from looking at an answer before they have solved the problem themselves.

Commutative algebra

Problem: A finite integral domain is a field.

Answer

Let a be a nonzero element of the finite integral domain A. Then the map

x \mapsto ax: A \to A

is injective (since A is an integral domain and a \ne 0) and hence surjective by finiteness. In particular, ax = 1 for some x, so a is invertible. \square
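
The following quick computational illustration (not part of the proof) checks the conclusion in the finite integral domain \mathbf{Z}/7\mathbf{Z}: for every nonzero a, the map x \mapsto ax is a bijection, so a has an inverse. A minimal Python sketch using only the standard library:

 # In Z/7Z, check that x -> a*x is surjective for every nonzero a,
 # which forces a*x = 1 for some x, i.e., a is invertible.
 p = 7
 for a in range(1, p):
     image = {(a * x) % p for x in range(p)}
     assert image == set(range(p))                       # the map is onto
     assert any((a * x) % p == 1 for x in range(p))      # so a has an inverse
 print("every nonzero element of Z/%dZ is invertible" % p)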

Problem: A polynomial has integer values for sufficiently large integer arguments if and only if it is a linear combination (over \mathbf{Z}) of binomial coefficients \binom{t}{n}.

Answer

Since \deg \binom{t}{n} = n, for dimension reasons, any f \in \mathbf{R}[t] of degree d can be written

f(t) = a_0 + a_1 \binom{t}{1} + ... + a_d \binom{t}{d}, \quad a_n \in \mathbf{R}.

Apply the finite difference operator \Delta g(t) = g(t + 1) - g(t) to both sides d times; since \Delta \binom{t}{n} = \binom{t}{n-1}, this leaves \Delta^d f = a_d, and \Delta^d f(t) is an integer combination of values of f at large integers, so a_d is an integer. By downward induction, all the a_n are integers. \square
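
The mechanism of the proof can be seen numerically (an illustrative sketch only, using the hypothetical example f(t) = t(t+1)/2): repeated differencing of the values of f recovers the coefficients a_k = (\Delta^k f)(0), which are indeed integers.

 # Recover a_k = (Delta^k f)(0) in f(t) = sum_k a_k binom(t, k) by repeated
 # finite differencing, illustrated with f(t) = t(t+1)/2.
 def f(t):
     return t * (t + 1) // 2            # an integer-valued polynomial of degree 2

 values = [f(t) for t in range(5)]      # f(0), ..., f(4)
 coeffs = []
 while values:
     coeffs.append(values[0])           # (Delta^k f)(0)
     values = [b - a for a, b in zip(values, values[1:])]   # apply Delta once
 print(coeffs)                          # [0, 1, 1, 0, 0]: f = binom(t,1) + binom(t,2)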

Problem: An integral domain is a PID if its prime ideals are principal. (Hint: apply Zorn's lemma to the set S of all non-principal ideals.)

Answer

Suppose, on the contrary, that S is nonempty. The union of a chain in S is again non-principal (a single generator would already lie in some member of the chain), so by Zorn's lemma there is a maximal element \mathfrak{i} \in S. We will reach a contradiction once we show \mathfrak{i} \in \operatorname{Spec}(A), since prime ideals are principal by hypothesis. To that end, let xy \in \mathfrak{i}. If x \not\in \mathfrak{i}, then, by maximality, (\mathfrak{i}, x) \not\in S. That is, it is principal; say,

(\mathfrak{i}, x) = (d).

Let \mathfrak{j} be the ideal consisting of all a \in A such that ax \in \mathfrak{i}. It turns out that

\mathfrak{j}d = \mathfrak{i}.

Indeed, since d \in (\mathfrak{i}, x), we have \mathfrak{j}d \subset \mathfrak{j}\mathfrak{i} + \mathfrak{j}x \subset \mathfrak{i}. Conversely, if z \in \mathfrak{i} \subset (d), write z = z' d; since x \in (d), we get z'x \in z'(d) = (z) \subset \mathfrak{i}, so z' \in \mathfrak{j} and z \in \mathfrak{j}d. Now, since yx \in \mathfrak{i}, we have y \in \mathfrak{j}; if y \not\in \mathfrak{i}, then \mathfrak{j} \supsetneq \mathfrak{i}, so \mathfrak{j} is principal by maximality, and then \mathfrak{i} = \mathfrak{j}d is principal, a contradiction. Hence y \in \mathfrak{i}. \square

Problem: A ring is noetherian if and only if its prime ideals are finitely generated. (Hint: Zorn's lemma.)

Answer

The direction (\Rightarrow) is obvious. For the converse, let S be the set of all proper ideals of A that are not finitely generated; we claim S is empty. Suppose not. The union of a chain in S again lies in S (a finite generating set would already lie in some member of the chain), so by Zorn's lemma S contains a maximal element \mathfrak i. We show \mathfrak i is prime, which is the desired contradiction since prime ideals are finitely generated by hypothesis. Let xy \in \mathfrak{i}. If x \not\in \mathfrak{i}, then, by maximality, (\mathfrak{i}, x) \not\in S. That is, it is finitely generated; say,

(\mathfrak{i}, x) = (i_1 + a_1 x, ..., i_n + a_n x), \quad i_k \in \mathfrak{i},\ a_k \in A.

Let \mathfrak{j} be the ideal consisting of all a \in A such that ax \in \mathfrak{i}. It turns out that

\mathfrak{i} = (i_1, ..., i_n) + \mathfrak{j}x.

In fact, if z \in \mathfrak{i}, then

z = b_1 (i_1 + a_1 x) + ... + b_n (i_n + a_n x)

Here, (b_1 a_1 + ... + b_n a_n)x = z - (b_1 i_1 + ... + b_n i_n) \in \mathfrak{i}, so b_1 a_1 + ... + b_n a_n \in \mathfrak{j}, which proves the displayed equality (the reverse inclusion is clear since i_k \in \mathfrak{i} and \mathfrak{j}x \subset \mathfrak{i}). Now, since yx \in \mathfrak{i}, we have y \in \mathfrak{j}; if y \not\in \mathfrak{i}, then \mathfrak{j} \supsetneq \mathfrak{i}, so \mathfrak{j} is finitely generated by maximality, and then so is \mathfrak{i} = (i_1, ..., i_n) + \mathfrak{j}x, a contradiction. Hence y \in \mathfrak{i}, and \mathfrak{i} is prime. \square

Problem: Every nonempty set of prime ideals has a minimal element with respect to inclusion.

Problem: If an integral domain A is algebraic over a field F, then A is a field.

Answer

Let u \in A be a nonzero element. Since u is algebraic over F, the subring F[u] \subset A is a finite-dimensional F-vector space, and it is an integral domain; hence multiplication by u on F[u] is injective and therefore surjective, so u is invertible in F[u] \subset A. \square

Problem: Every two elements in a UFD have a gcd.

Problem: If f \in A[X] is a unit, then f - a_0 is nilpotent, where a_0 = f(0) is the constant term of f.

Answer

Write f = a_0 + a_1 X + ... + a_n X^n with n \ge 1 (the case n = 0 is trivial), and let g = b_0 + b_1 X + ... + b_m X^m be its inverse. Comparing top coefficients in fg = 1 gives a_n b_m = 0. Since a_n b_{m-1} + a_{n-1} b_m = 0, it follows that {a_n}^2 b_{m-1} = 0. By an obvious induction, {a_n}^r kills every coefficient of g for some r; hence it kills g. Thus, {a_n}^r = {a_n}^r f g = 0, meaning a_n is nilpotent. Recall that the sum of a unit and a nilpotent element is a unit. Hence f - a_n X^n is again a unit, and applying the above argument to it shows that a_{n-1} is nilpotent. Continuing in this way, a_1, ..., a_n are all nilpotent, so f - a_0 = a_1 X + ... + a_n X^n is a sum of nilpotent elements and thus nilpotent. \square
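
A toy check of the statement (a sketch, not part of the proof): in (\mathbf{Z}/4\mathbf{Z})[X], the polynomial f = 1 + 2X is a unit (it is its own inverse) and f - a_0 = 2X is nilpotent.

 # In (Z/4Z)[X]: f = 1 + 2X is a unit (f*f = 1) and f - a_0 = 2X is nilpotent.
 def polymul_mod(p, q, m):
     r = [0] * (len(p) + len(q) - 1)
     for i, a in enumerate(p):
         for j, b in enumerate(q):
             r[i + j] = (r[i + j] + a * b) % m
     return r

 f = [1, 2]                              # 1 + 2X, coefficients mod 4
 print(polymul_mod(f, f, 4))             # [1, 0, 0]: f is its own inverse
 print(polymul_mod([0, 2], [0, 2], 4))   # [0, 0, 0]: (2X)^2 = 0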

Problem: The nilradical and the Jacobson radical of A[X] coincide.

Answer

We only have to prove that if f is in the Jacobson radical, then f is nilpotent, since the converse holds in any ring. Recall that 1 - fg is a unit for every g; in particular, 1 - Xf is a unit with constant term 1. By the previous problem, -Xf is then nilpotent; hence Xf, and therefore f, is nilpotent.

Problem: Let A be a ring such that every ideal not contained in its nilradical contains an element e such that e^2 = e \ne 0. Then the nilradical and the Jacobson radical of A coincide.

Answer

In general, the nilradical is contained in the Jacobson radical. Suppose the inclusion is strict. Then the Jacobson radical is an ideal not contained in the nilradical, so by hypothesis it contains an element e with e^2 = e \ne 0. Since e lies in the Jacobson radical, 1 - e is a unit; from e(1 - e) = 0 we get e = 0, a contradiction. \square

Problem: f \in A[[X]] is a unit if and only if the constant term of f is a unit.

Real analysis

Problem: \sqrt{3} + 2^{1/3} is irrational.

Answer

Let x = \sqrt{3} + 2^{1/3}. Then 2 = (x - \sqrt{3})^3 = x^3 - 3\sqrt{3}x^2 + 9x - 3\sqrt{3}, so \sqrt{3} = (x^3 + 9x - 2)/(3x^2 + 3). If x were rational, the right-hand side would be rational, contradicting the irrationality of \sqrt{3}. \square
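
A floating-point sanity check of the identity used above (an illustration, not a proof):

 # For x = sqrt(3) + 2**(1/3), the identity sqrt(3) = (x^3 + 9x - 2)/(3x^2 + 3)
 # holds; a rational x would therefore force sqrt(3) to be rational.
 x = 3 ** 0.5 + 2 ** (1 / 3)
 lhs = (x ** 3 + 9 * x - 2) / (3 * x ** 2 + 3)
 print(lhs, 3 ** 0.5)                    # both approximately 1.7320508...
 assert abs(lhs - 3 ** 0.5) < 1e-12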

Problem: Is \sqrt{2}^\sqrt{2} irrational?

Problem: Compute \int_{-\infty}^\infty {\sin x \over x} \, dx

Problem: If \lim_{x \to \infty} (f(x) + f'(x)) exists, then \lim_{x \to \infty} f(x) exists and \lim_{x \to \infty} f'(x) = 0

Answer

Apply L'Hospital's rule to {e^x f(x) \over e^x}. \square

Problem: Let f:\mathbf{R} \to [0,+\infty) be nonvanishing and such that f(x)f''(x)\geq 0 for all x; then \int_{-\infty}^{+\infty}f(x)^2\,dx=+\infty

Answer

Differentiate g(x)=f(x)^2 twice: g'' = 2(f'^2 + ff'') \ge 0, so g is convex; now look at the asymptotic behaviour of g'(x).

Problem Let X be a complete metric space, and f:X \to X be a function such that f \circ f is a contraction. Then f admits a fixed point.

Answer

By Banach's fixed point theorem, f \circ f has a unique fixed point x_0; i.e., x_0 = (f \circ f)(x_0). But then

f(x_0) = f((f \circ f)(x_0)) = (f \circ f)(f(x_0))

In other words, f(x_0) is also a fixed point of f \circ f. By uniqueness, x_0 = f(x_0). \square
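
A concrete instance (an illustrative sketch, not needed for the proof): f(x) = \cos x on \mathbf{R} is not a contraction (its best Lipschitz constant is 1), but f \circ f is one, with constant \sin 1 < 1. Iterating f \circ f locates its fixed point, which f then fixes as well:

 import math

 # g = cos o cos is a contraction (|g'| <= sin(1) < 1) even though cos is not.
 # Iterate g to its unique fixed point and check that cos fixes it too.
 f = math.cos
 def g(x):
     return f(f(x))

 x = 100.0                       # arbitrary starting point
 for _ in range(200):
     x = g(x)
 print(x)                        # ~0.739085 (the fixed point of cos)
 print(abs(f(x) - x) < 1e-9)     # True: x is also fixed by f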

Problem Let X be a compact metric space, and f:X \to X be such that

d(f(x), f(y)) < d(x, y)

for all x \ne y \in X. Then f admits a unique fixed point. (Do not use Banach's fixed point theorem.)

Answer

Let c = \inf \{ d(x, f(x)) \mid x \in X \}. The map x \mapsto d(x, f(x)) is continuous (f is continuous because it does not increase distances), so by compactness there is x_0 such that c = d(x_0, f(x_0)). If x_0 \ne f(x_0), then, by the definition of c and the hypothesis, we have:

d(x_0, f(x_0)) \le d(f(x_0), f \circ f(x_0)) < d(x_0, f(x_0)),

which is absurd. Thus, d(x_0, f(x_0)) = 0. For uniqueness, suppose y_0 = f(y_0). If x_0 \ne y_0, then

d(x_0, y_0) \le d(x_0, f(x_0)) + d(f(x_0), f(y_0)) + d(f(y_0), y_0) = d(f(x_0), f(y_0)) < d(x_0, y_0),

which is absurd. Hence, x_0 is the unique fixed point. \square

Problem Let f:\mathbf{R}^2 \to \mathbf{R}^2 be such that

d(f(x), f(y)) \geq A d(x, y), \qquad A>1,

for all x, y. Then f admits a unique fixed point.

Problem Let X be a compact metric space, and f:X \to X be a contraction. Then

\bigcap_{n=1}^\infty f^n(X)

consists of exactly one point.

Answer

Since f is a contraction, it admits a fixed point x_0. Thus, x_0 \in \cap f^n(X). Let y \in \cap f^n(X). Then

y = f(x_1) = f^2(x_2) = f^3(x_3) = ...

for some sequence x_1, x_2, .... Let c < 1 be the Lipschitz constant of f. Now,

d(x_0, y) = d(x_0, f^n(x_n)) \le d(x_0, f^n(x_1)) + d(f^n(x_1), f^n(x_n)) \le d(x_0, f^n(x_1)) + c^n d(x_1, x_n),

which goes to 0 as n \to \infty, since d(x_1, x_n) is bounded by the diameter of the compact space X, c < 1, and f^n(x_1) \to x_0 by Banach's fixed point theorem. Hence y = x_0. \square

Problem: Every closed subset of \mathbf{R}^n is separable.

Answer

Let A \subset \mathbf{R}^n be closed. For each k, let E_k be a countable dense subset of A \cap \overline{B}(0, k) (such a set exists because every subset of \mathbf{R}^n is second countable, hence separable), and let

E = \bigcup_{k=1}^\infty E_k

Then A = \overline{E}. In fact, since E_k is a subset of A for any k, E \subset A and so \overline{E} \subset \overline{A} = A. Conversely, if x \in A, then for some k,

x \in A \cap \overline{B}(0, k) = \overline{E_k} \subset \overline{E}.

\square

Problem: Any connected nonempty subset of \mathbf{R} either consists of a single point or contains an irrational number.

Answer

Let E be a connected nonempty subset of \mathbf{R}. Then E is an interval with end-points a \le b. If E has more than one point, then a < b and E contains the nonempty open interval (a, b), which contains an irrational number. \square

Problem: Let f: \mathbf{R} \to \mathbf{R} be a bounded function. f is continuous if and only if f has closed graph.

Problem: If f: \mathbf{R} \to \mathbf{R} is a homeomorphism, then f is monotone.

Problem Let f:[0,1]^2 \to \mathbf{R} be a continuous function. Then

g(x) = \sup \{ f(x, y) | y \in [0, 1] \} \quad (x \in [0, 1])

is continuous.

Answer

Let \epsilon > 0. Since f is uniformly continuous, there is \delta > 0 so that

|f(x', y') - f(x, y)| < \epsilon whenever |(x', y') - (x, y)| < \delta

Now suppose |x' - x| < \delta. For every y, f(x, y) < f(x', y) + \epsilon \le g(x') + \epsilon; taking the supremum over y gives g(x) \le \epsilon + g(x'), and by symmetry g(x') \le \epsilon + g(x). Hence,

|g(x') - g(x)| \le \epsilon. \square

Problem Let f, g: \mathbf{R} \to \mathbf{R} be continuous functions such that: f(g(x)) = g(f(x)) for every x. The equation f(f(x)) = g(g(x)) has a solution if and only if f(x) = g(x) has one.

Answer

(\Leftarrow) is immediate from the relation f \circ g = g \circ f. For (\Rightarrow), suppose we have x so that f(f(x)) = g(g(x)). Define h(y) = f(y) - g(y) for y \in \mathbf{R}. Then

h(f(x)) = f(f(x)) - g(f(x)) = g(g(x)) - f(g(x)) = -h(g(x)).

Thus, h(f(x)) + h(g(x)) = 0. If h(f(x)) = 0, then we are done. If h(f(x)) < 0, then, since h(g(x)) > 0, by the intermediate value theorem, h(z) = 0 for some z. The same argument works for the case when h(f(x)) > 0. \square

Problem Suppose f: \mathbf{R} \to \mathbf{R} is uniformly continuous. Then there are constants a, b such that:

|f(x)| \le a|x| + b

for all x \in \mathbf{R}.

Answer

There exists \delta > 0 such that

|f(x) - f(y)| < 1 whenever |x-y| < \delta.

Let x \ge 0. Then n - 1 \le {x \over \delta} \le n for some integer n \ge 1. It follows:

|f(x)| \le |f(0)| + |f(0) - f(\delta)| + ... + |f((n-1) \delta) - f(x)| \le |f(0)| + n

Here, n \le 1 + {1 \over \delta} |x|, so |f(x)| \le |f(0)| + 1 + {1 \over \delta}|x|; that is, a = 1/\delta and b = |f(0)| + 1 work. The estimate for x < 0 is analogous. \square

Problem Let X be a compact metric space, and f: X \to X be an isometry: i.e., d(f(x), f(y)) = d(x, y). Then f is a bijection.

Answer

f is clearly injective. To show that f is surjective, let x \in X. Since X is compact, the sequence f^n(x) contains a convergent subsequence, say f^{n_j}(x). Then

d(x, f^{n_{j+1} - n_j}(x)) = d(f^{n_j}(x), f^{n_{j+1}}(x)) \to 0,

since a convergent sequence is Cauchy and f^{n_j} is an isometry. As n_{j+1} - n_j \ge 1, this shows that x is in the closure of f(X). Since f(X) is the continuous image of a compact set, it is compact, hence closed. We conclude x \in f(X). \square

Problem Let p_n be a sequence of polynomials with degree ≤ some fixed D. If p_n converges pointwise to 0 on [0, 1], then p_n converges uniformly on [0, 1].

Answer

We first prove a weaker statement:

If p_n converges pointwise at all but finitely many points of [0,1], then p_n converges uniformly on the complement of some finite subset of [0,1].

We proceed by induction on D. If D = 0, the polynomials p_n are constants and the claim is obvious. Suppose the claim holds for D - 1. We write:

p_n(x) = p_n(x_0) + (x - x_0) q_n(x)

where x_0 is a point at which p_n(x_0) converges. At every other point where p_n converges, q_n(x) = (p_n(x) - p_n(x_0))/(x - x_0) converges as well; since \deg q_n \le D - 1, the inductive hypothesis gives that q_n converges uniformly on the complement of some finite subset F of [0, 1]. As x - x_0 is bounded on [0, 1], p_n converges uniformly on the complement of F \cup \{x_0\}, which proves the claim. Now, by the claim, p_n \to 0 uniformly outside some finite set; since p_n \to 0 at each of the finitely many remaining points as well, p_n \to 0 uniformly on [0, 1]. \square

Problem On a closed interval a monotone function has at most countably many discontinuous points.

Problem Prove that in \mathbf{R}^n the relation B_r(x)\supset B_s(y) implies r > s, and find a metric space in which the implication does not hold.

Linear algebra

Throughout the section V denotes a finite-dimensional vector space over the field of complex numbers.

Problem Given an integer n > 1, find a matrix A with integer entries such that A \ne I but A^n = I

Answer

The permutation matrix of a cycle of length n. \square
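
For instance (a sketch assuming NumPy is available), the permutation matrix of an n-cycle works:

 import numpy as np

 # The permutation matrix of the n-cycle: A != I, yet A**n = I.
 n = 5
 A = np.roll(np.eye(n, dtype=int), 1, axis=0)    # cyclically shift the rows of I
 print(np.array_equal(A, np.eye(n, dtype=int)))                              # False
 print(np.array_equal(np.linalg.matrix_power(A, n), np.eye(n, dtype=int)))   # True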

Problem Let A be a real symmetric positive-definite matrix and b some fixed vector. Let \phi(x) = \langle Ax, x \rangle - 2 \langle x, b \rangle. Then Az = b if and only if \phi(z) \le \phi(x) for every x.

Answer

Note that A is invertible since it is positive-definite. Fix x \ne 0 and let f(t) = \phi(A^{-1} b + tx). Using the symmetry of A, one computes

f'(t) = 2t\langle Ax, x \rangle

Since \langle Ax, x \rangle > 0, f is strictly minimized at t = 0; that is, \phi is strictly minimized at A^{-1}b along every line through it. Hence \phi(z) \le \phi(x) for all x exactly when z = A^{-1}b, i.e., Az = b. \square
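
A numerical sanity check of the statement (a sketch assuming NumPy, with a randomly generated positive-definite A): the solution of Az = b minimizes \phi.

 import numpy as np

 # phi(x) = <Ax,x> - 2<x,b> is minimized exactly at the solution of Az = b.
 rng = np.random.default_rng(0)
 M = rng.standard_normal((4, 4))
 A = M @ M.T + 4 * np.eye(4)        # symmetric positive-definite
 b = rng.standard_normal(4)

 def phi(x):
     return x @ A @ x - 2 * x @ b

 z = np.linalg.solve(A, b)
 print(all(phi(z) <= phi(z + rng.standard_normal(4)) for _ in range(1000)))   # True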

Problem If \operatorname{tr}(AB) = 0 for all square matrices B, then A = 0

Answer

Take B = A^T; then 0 = \operatorname{tr}(AA^T) = \sum_{i,j} a_{ij}^2, so every entry of A is zero. (Over \mathbf{C}, take B = A^* and use \operatorname{tr}(AA^*) = \sum_{i,j} |a_{ij}|^2.) \square

Problem Let x be a square matrix over a field of characteristic zero. If \operatorname{tr}(x^k) = 0 for all k > 0, then x is nilpotent.

Answer

We may assume the field is algebraically closed. Suppose x is not nilpotent, and let \lambda_1, \lambda_2, ..., \lambda_n be its distinct nonzero eigenvalues, with multiplicities m_1, ..., m_n \ge 1. The hypothesis \operatorname{tr}(x^k) = 0 then gives the system of linear equations:

\{ m_1 \lambda_1^k + m_2 \lambda_2^k + ... + m_n \lambda_n^k = 0 \}_{1 \le k \le n}

The coefficient matrix has determinant \lambda_1 \cdots \lambda_n times a Vandermonde determinant, hence nonzero; thus m_1 = ... = m_n = 0 in the field, which contradicts m_i \ge 1 in characteristic zero. Hence every eigenvalue of x is zero and x is nilpotent. \square

Problem Let S, T be square matrices of the same size. Then ST and TS have the same eigenvalues.

Answer

Let \lambda be an eigenvalue of ST. If \lambda = 0, then 0 = \det(ST) = \det(TS), so \lambda is an eigenvalue of TS. If \lambda \ne 0, then STx = \lambda x for some nonzero x, and so (TS)Tx = \lambda Tx. Since Tx = 0 would give \lambda x = STx = 0, a contradiction, Tx is nonzero and is therefore an eigenvector of TS with eigenvalue \lambda. We have thus proved that every eigenvalue of ST is an eigenvalue of TS; by the same argument, every eigenvalue of TS is an eigenvalue of ST. \square

Problem Let S, T be square matrices of the same size. Then ST and TS have the same eigenvalues with the same multiplicities.

Answer

If S is invertible, then

ST = S(TS)S^{-1}

and thus

\operatorname{det}(TS - \lambda I) = \operatorname{det}(ST - \lambda I).

If S is not invertible, then S + tI is invertible when t > 0 is sufficiently small. Thus,

\operatorname{det}(T(S + tI) - \lambda I) = \operatorname{det}((S + tI)T - \lambda I)

and letting t \to 0 gives the same identity, since both sides are polynomials in t. In either case, the characteristic polynomials of TS and ST coincide, so they have the same eigenvalues with the same multiplicities. \square
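
A quick numerical check (a sketch assuming NumPy): the characteristic polynomials of ST and TS agree even when S is singular.

 import numpy as np

 # ST and TS have the same characteristic polynomial (hence the same
 # eigenvalues with multiplicities), even for a singular S.
 rng = np.random.default_rng(1)
 S = rng.standard_normal((4, 4))
 S[:, 0] = 0                        # make S singular
 T = rng.standard_normal((4, 4))
 print(np.allclose(np.poly(S @ T), np.poly(T @ S)))   # True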

Problem Let A be a square matrix over the complex numbers. Then A is Hermitian (that is, A = A^*) if and only if

\langle Ax, x \rangle

is real for every x.

Answer

(\Rightarrow) is obvious. (\Leftarrow) Since \langle Ax, x \rangle is real for every x,

\langle Ax, x \rangle = \overline{\langle Ax, x \rangle} = \langle x, Ax \rangle = \langle A^*x, x \rangle.

Now recall that the numerical radius

w(T) = \sup_{\|x\|=1} |\langle T x, x \rangle|

is a norm (this is the last problem below); applying it to A - A^* gives A = A^*. \square

Problem Suppose the square matrix A = (a_{ij}) satisfies:

|a_{ii}| > \sum_{j \ne i} |a_{ij}|

for all i. Then A is invertible.

Answer

Suppose Ax = 0. Then, in particular, each component of Ax is zero; i.e.,

0 = \sum_j a_{ij}x_j = a_{ii}x_i + \sum_{j \ne i} a_{ij}x_j

Hence a_{ii}x_i = -\sum_{j \ne i} a_{ij}x_j, and the triangle inequality gives:

|a_{ii}| |x_i| = |\sum_{j \ne i} a_{ij}x_j| \le \sum_{j \ne i} |a_{ij}| |x_j|

for all i. Pick k such that \max \{ |x_1|, |x_2|, ... |x_n| \} = |x_k|. Then, by hypothesis,

|a_{kk}| |x_k| \le \sum_{j \ne k} |a_{kj}| |x_j| \le (\sum_{j \ne k} |a_{kj}|) |x_k| < |a_{kk}||x_k|,

which is absurd, unless |x_k| = 0. Hence, x = 0. \square
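
A small numerical illustration (a sketch assuming NumPy): a strictly diagonally dominant matrix built at random has full rank.

 import numpy as np

 # Build a matrix with |a_ii| > sum_{j != i} |a_ij| and confirm it is invertible.
 rng = np.random.default_rng(2)
 A = rng.uniform(-1, 1, size=(5, 5))
 np.fill_diagonal(A, 0)
 np.fill_diagonal(A, np.abs(A).sum(axis=1) + 1)   # each |a_ii| beats its off-diagonal row sum

 off_diag = np.abs(A).sum(axis=1) - np.abs(np.diag(A))
 assert np.all(np.abs(np.diag(A)) > off_diag)     # strict diagonal dominance
 print(np.linalg.matrix_rank(A) == 5)             # True: A is invertible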

Problem Let T, S \in \operatorname{End}(V). If V is finite-dimensional, then prove TS is invertible if and only if ST is invertible. Is this also true when V is infinite-dimensional?

Answer

For the first part, use determinants: \det(TS) = \det(T)\det(S) = \det(ST). For the second, the answer is no: on the space of sequences, the left shift composed with the right shift is the identity, while the composition in the other order is not even injective. \square

Problem: Let T, S be linear operators on V. Then

\operatorname{dim}\operatorname{ker}(TS) \le \operatorname{dim}\operatorname{ker}(S) + \operatorname{dim}\operatorname{ker}(T)
Answer

The map

S: \operatorname{ker}(TS) \to \operatorname{ker}T

is well-defined (if TSv = 0, then Sv \in \operatorname{ker}T). By rank-nullity applied to this restriction,

\operatorname{dim}\operatorname{ker}(T) \ge \operatorname{dim}\operatorname{ker}(TS) - \operatorname{dim}\operatorname{ker}(S|_{\operatorname{ker}(TS)}) \ge \operatorname{dim}\operatorname{ker}(TS) - \operatorname{dim}\operatorname{ker}(S). \square

Problem Every matrix (over an arbitrary field) is similar to its transpose.

Answer

The shortest proof uses the Smith normal form: a matrix A is similar to a matrix B over a field if and only if XI - A and XI - B have the same Smith normal form over the polynomial ring. Evidently this is the case when B is the transpose of A, since a matrix and its transpose have the same Smith normal form. \square

Problem Every nonzero eigenvalue of a real skew-symmetric matrix is purely imaginary.

Problem If the trace of a matrix A is zero, then A is similar to a matrix whose main diagonal consists of only zeros.

Problem \operatorname{rank}(A^n) - \operatorname{rank}(A^{n-1}) \le \operatorname{rank}(A^{n+1}) - \operatorname{rank}(A^n) for any square matrix A.

Problem: Every square matrix is similar to an upper-triangular matrix.

Answer

Jordan form or Schur form.

Problem: Let A be a normal matrix. Then A^* is a polynomial in A.

Problem: Let A be a normal matrix. Then:

\|A\| = \max_{ |x|=1 } |(Ax \mid x)| = \sup_{\lambda \in \operatorname{Sp}(A)} |\lambda|

Problem: Let A be a square matrix. Then A^n \to 0 as n \to \infty (in operator norm) if and only if the spectral radius of A is less than 1

Answer

Use the spectral radius formula: \lim_{n \to \infty} \|A^n\|^{1/n} = \max_{\lambda \in \operatorname{Sp}(A)} |\lambda|. \square
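
A numerical illustration (a sketch assuming NumPy): for a matrix with spectral radius below 1, the norms of its powers decay to 0.

 import numpy as np

 # Powers of A tend to 0 precisely when the spectral radius of A is < 1.
 A = np.array([[0.5, 1.0],
               [0.0, 0.6]])          # eigenvalues 0.5 and 0.6, spectral radius 0.6
 print(max(abs(np.linalg.eigvals(A))))                    # 0.6
 print(np.linalg.norm(np.linalg.matrix_power(A, 50)))     # ~1e-10: A^n -> 0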

Problem: Let A be a square matrix. Then \|A\| = \|A^*A\|^{1/2}

Problem: T \mapsto \sup_{\|x\|=1} |(Tx \mid x)| is a norm for bounded operators T on a complex Hilbert space.

Answer

It is clear that the map is a seminorm. To see it is a norm, suppose (Tx \mid x) = 0 for all x. In particular,

0 = (T(x + y) \mid x+y) = (Tx \mid y) + (Ty \mid x)
0 = (T(x + iy) \mid x+ iy) = -i (Tx \mid y) + i (Ty \mid x)

Combining the two, we get:

(Tx \mid y) = 0

for all x and y. Taking y = Tx gives Tx = 0 for all x; hence T = 0. \square