From: Dave Rusin
Date: Tue, 28 Jul 1998 11:02:15 -0500 (CDT)
To: mjoao@alf1.cii.fc.ul.pt
Subject: Re: Vector spaces
Newsgroups: sci.math.research
The answer to your question is "yes" if "automorphism" picks out a sufficiently
narrow class of maps. I think it must, although I didn't succeed in
proving that; perhaps you can finish the proof. But the real question is
in classifying the automorphisms Z of J, not so much investigating their
relationship with End(V).
In article you write:
>Let V be a vector space (finite or infinite dimension) and let J be the
>set of all endomorphisms of V whose rank is at most one. (That is, if k
>belongs to J then Dim( k(V) ) leq 1).
So J is the set of elements of the form ( v tensor phi) in (V tensor V*)
where V* is the dual space. If V is finite dimensional, we can pick a
basis and then write J as the set of matrices v1 * v2' for arbitrary
column vectors v1, v2. In both cases, the constituents of each element
of J are only defined up to a common scalar.
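(A quick numerical illustration of the matrix picture, my addition: every
element of J is an outer product v1 * v2', which has rank at most one, and
the factors are only determined up to a common scalar.)

```python
import numpy as np

# In the matrix picture, an element of J is an outer product v1 * v2'.
rng = np.random.default_rng(0)
v1 = rng.standard_normal(5)
v2 = rng.standard_normal(5)
k = np.outer(v1, v2)

# Such a matrix has rank at most one:
assert np.linalg.matrix_rank(k) <= 1

# The constituents are only defined up to a common scalar:
assert np.allclose(np.outer(2.0 * v1, v2 / 2.0), k)
```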
This describes J as a set in a vector space. The multiplicative structure
is given by (v tensor phi) . (w tensor psi) = (phi(w)).v tensor psi, or
in matrix terms, (v1*v2') . (v3*v4') = (v2 dot v3).v1 * v4'
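(The product law is easy to check numerically; this sketch is my addition,
using random vectors.)

```python
import numpy as np

# Composition of rank-one matrices:
# (v1 * v2') . (v3 * v4') = (v2 dot v3) v1 * v4'
rng = np.random.default_rng(0)
v1, v2, v3, v4 = rng.standard_normal((4, 5))

left = np.outer(v1, v2) @ np.outer(v3, v4)
right = np.dot(v2, v3) * np.outer(v1, v4)
assert np.allclose(left, right)
```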
>Denote by End(V) the semigroup of all endomorphisms of V.
That's M_n(R) in the finite-dimensional-with-basis case, of course.
In the case of a general vector space I suppose you will want to
restrict to continuous endomorphisms? (Here and in the definition of J.)
>The set J is closed under composition of endomorphisms and hence it is a
>semigroup. In fact it is a right ideal in the sense that for all f in
>End(V) and every k in J, we have that fk belongs to J.
>
>So, for every f in End(V), we can define a mapping as follows:
>
>G_f : J --> J ; G_f (k) = fk.
So G_f( v tensor phi ) = f(v) tensor phi . In the matrix case this
says G_M ( v1 * v2' ) = (M v1) * v2' for each square matrix M.
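(Again easy to verify numerically; this check is my addition.)

```python
import numpy as np

# The action G_M in the matrix picture: left multiplication by a square
# matrix M sends v1 * v2' to (M v1) * v2'.
rng = np.random.default_rng(1)
n = 4
M = rng.standard_normal((n, n))
v1 = rng.standard_normal(n)
v2 = rng.standard_normal(n)

lhs = M @ np.outer(v1, v2)      # G_M(v1 * v2')
rhs = np.outer(M @ v1, v2)      # (M v1) * v2'
assert np.allclose(lhs, rhs)
```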
>On the other hand, as J is a semigroup, we can fix Z, an automorphism of
>the semigroup J.
It seems to me this is where your question really lies. You need to
ask, what _are_ the automorphisms Z of the semigroup J ? I believe the
correct answer is, they are of the form
(v tensor phi) -> ( L(v) tensor (phi o L^(-1)) ).
for some automorphism L of V; in matrix terms this says they
are of the form (v1 * v2') -> (A v1) * ((A^(-1))'v2)' for some
invertible matrix A. I _can_ show that the automorphisms must have
the form (v tensor phi) -> (F(v) tensor G(phi) ) for some _functions_
F : V -> V, G : V^* -> V^*; I can prove more too, but I didn't see how
to prove F must be _linear_. However, let me assume this to be true.
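(In the matrix picture the claimed form of Z is just conjugation
k -> A k A^(-1), which visibly preserves products and rank. A numerical
check, my addition; A is a generic random matrix, assumed invertible.)

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
A = rng.standard_normal((n, n))   # generic, so assumed invertible
Ainv = np.linalg.inv(A)

def Z(k):
    # conjugation by A
    return A @ k @ Ainv

# Conjugation agrees with the stated formula on rank-one matrices:
# (v1 * v2') -> (A v1) * ((A^(-1))' v2)'
v1 = rng.standard_normal(n)
v2 = rng.standard_normal(n)
assert np.allclose(Z(np.outer(v1, v2)), np.outer(A @ v1, Ainv.T @ v2))

# And it is multiplicative, hence a semigroup automorphism of J:
k1 = np.outer(rng.standard_normal(n), rng.standard_normal(n))
k2 = np.outer(rng.standard_normal(n), rng.standard_normal(n))
assert np.allclose(Z(k1 @ k2), Z(k1) @ Z(k2))
```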
>Now, for every f in End(V) define a mapping as follows:
>
>F_f : J --> J ; F_f (k) = Z (f Z^-1 (k)).
>
>(Z^-1 is the inverse of the automorphism Z).
>Observe that Z^-1 (k) is an element of J and hence f Z^-1(k) is the
>composition of two endomorphisms of V.
So this one would be
F_f ( v tensor phi ) = ( F(f(F^(-1)(v))) tensor G(G^(-1)(phi)) )
and of course the second factor is just phi; assuming F is linear,
the first factor is the image of v under the linear map F o f o F^(-1).
>So, my question is this: for every f in End(V) is there a g in End(V) such
>that F_f = G_g ?
So this seems to be "yes": g = F o f o F^(-1) is also linear.
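(A numerical check of this conclusion, my addition, in the case where Z is
conjugation by an invertible A, so that F = A is linear: then F_f = G_g
with g = A M A^(-1) in matrix terms.)

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
A = rng.standard_normal((n, n))   # plays the role of F, assumed invertible
Ainv = np.linalg.inv(A)
M = rng.standard_normal((n, n))   # an arbitrary f in End(V)

Z = lambda k: A @ k @ Ainv        # the automorphism of J
Zinv = lambda k: Ainv @ k @ A
g = A @ M @ Ainv

k = np.outer(rng.standard_normal(n), rng.standard_normal(n))
assert np.allclose(Z(M @ Zinv(k)), g @ k)   # F_f(k) = G_g(k)
```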
Thus the work remaining is to show the automorphisms of J have the
desired form. Here's a sketch of a partial proof (you'll notice I blithely
assume all real values are nonzero, etc.)
Suppose we are given an automorphism of J. We may write it in the form
(v tensor phi) -> (F(v,phi) tensor G(v,phi)), where F and G are only
well-defined up to a common scalar multiple; in particular, their images
[F(v,phi)] in P(V) and [G(v,phi)] in P(V*) _are_ well-defined.
If this is indeed a homomorphism of algebras, then from the product law
(v tensor phi) . (w tensor psi) = (phi(w)).v tensor psi
we see [G(w,psi)] = [G(phi(w).v, psi)] which means, since w and v are
arbitrary, that [G(w,psi)] depends on psi alone. Writing the product law as
(v tensor phi) . (w tensor psi) = v tensor (phi(w)).psi
we see likewise that [F(v,phi)] depends only on v. Picking representative
vectors in each of the lines [F(v)] and [G(psi)] we then see the automorphism
has the form
v tensor phi -> ( F(v) tensor t(v,phi).G(phi) )
where, in order for this to be well-defined, we need, for every scalar c,
constants satisfying
F(c v) = lambda(c,v,phi) F(v)
G(1/c phi) = 1/lambda(c,v,phi) t(v,phi)/t(cv,phi/c) G(phi)
The first equation forces lambda to be independent of phi and by symmetry
it is also independent of v. Then the defining property of lambda makes it
a homomorphism R^* -> R^*; if we assume continuity, we have lambda(c) = c^r
for some real r.
Next let's get rid of the t's.
Using the product law again, we get
(F(v) tensor t(v,phi) G(phi)) . (F(w) tensor t(w,psi) G(psi)) =
(F( phi(w).v ) tensor t(phi(w).v,psi) G(psi) );
the left side becomes
t(v,phi) G(phi)( F(w) ) . F(v) tensor t(w,psi) G(psi) =
t(v,phi) G(phi)( F(w) ) t(w,psi). F(v) tensor G(psi)
so the multiplicativity is satisfied iff
t(v,phi) G(phi)( F(w) ) t(w,psi). F(v) = t(phi(w).v,psi) F( phi(w).v )
Both sides are multiples of the same vector; we get this equation among
scalars:
G(phi)( F(w) ) t(v,phi) t(w,psi) = t(phi(w).v,psi) ( phi(w) )^r
Fix a pair (w0, phi0) with phi0(w0)=1; then if G(phi0)( F(w0) ) = c,
this equation shows t(v,psi) = c t(v,phi0) t(w0, psi) for all v, psi.
So if we re-express our automorphism as
(v tensor phi) -> (F'(v) tensor t'(v,phi) G'(phi) )
where G'(phi) = G(phi)*t(w0,phi)*c and F'(v) = F(v)*t(v,phi0), then
t'(v,phi) = t(v,phi)/(c*t(v,phi0)*t(w0,phi)) = 1 is _constant_.
So we have shown that our automorphism may be written in the desired form
(v tensor phi) -> (F(v) tensor G(phi)) for some maps F:V->V, G:V*->V*
which satisfy the additional property that there is a constant r with
(*)... G(phi)( F(w) ) = (phi(w))^r
for all phi and w. From this I would like to show that F must be
linear. Actually I think it's then also true that r=1, G is linear, and
G = F^(-1)^* (that is, G(phi) = phi o F^(-1) ), although as the
earlier part of the proof shows, this information is not necessary for
your purposes.
Note that F need _not_ be linear if the above equation (*) is only assumed
to hold for a _single_ phi; e.g. if V = R^2 and F(w1,w2) = (w1^3, w2^3)
then (*) holds (with r=3) for both phi = projection to the first coordinate
and phi = projection to the second coordinate, as long as G(phi)=phi for
these two functionals. So to prove the linearity of F we will have to use
the fact that (*) holds for _all_ phi. I don't see how to do this.
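(A numerical check of that non-linear example, my addition: V = R^2,
F(w1,w2) = (w1^3, w2^3), G(phi) = phi for the two coordinate projections.
Then (*) holds with r = 3 for those two functionals, but F is not linear.)

```python
import numpy as np

F = lambda w: w**3                # coordinatewise cubing on R^2
w = np.array([2.0, -1.5])
u = np.array([1.0, 1.0])

# For phi = i-th coordinate projection and G(phi) = phi,
# G(phi)(F(w)) = (phi(w))^3, so (*) holds with r = 3:
for i in (0, 1):
    assert np.isclose(F(w)[i], w[i]**3)

# ...but F fails additivity, hence is not linear:
assert not np.allclose(F(u + w), F(u) + F(w))
```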
dave