
9.1: The Matrix of a Linear Transformation





Let \(T : V \to W\) be a linear transformation where \(\func{dim}V = n\) and \(\func{dim}W = m\). The idea is to convert a vector \(\vect{v}\) in \(V\) into a column in \(\RR^n\), multiply that column by a matrix \(A\) to get a column in \(\RR^m\), and convert this column back to get \(T(\vect{v})\) in \(W\).

Converting vectors to columns is a simple matter, but one small change is required. Up to now the order of the vectors in a basis has been of no importance. However, in this section we will speak of an ordered basis \(\{\vect{b}_1, \vect{b}_2, \dots, \vect{b}_n\}\), which is just a basis where the order in which the vectors are listed is taken into account. Hence \(\{\vect{b}_2, \vect{b}_1, \vect{b}_3\}\) is a different ordered basis from \(\{\vect{b}_1, \vect{b}_2, \vect{b}_3\}\).

If \(B = \{\vect{b}_1, \vect{b}_2, \dots, \vect{b}_n\}\) is an ordered basis of a vector space \(V\), and if \[\vect{v} = v_1\vect{b}_1 + v_2\vect{b}_2 + \cdots + v_n\vect{b}_n, \quad v_i \in \RR\] is a vector in \(V\), then the (uniquely determined) numbers \(v_1, v_2, \dots, v_n\) are called the coordinates of \(\vect{v}\) with respect to the basis \(B\).

The Coordinate Vector \(C_B(\vect{v})\) of \(\vect{v}\) for a basis \(B\) 027894 The coordinate vector of \(\vect{v}\) with respect to \(B\) is defined by \[C_B(\vect{v}) = C_B(v_1\vect{b}_1 + v_2\vect{b}_2 + \cdots + v_n\vect{b}_n) = \leftB \begin{array}{c} v_1 \\ v_2 \\ \vdots \\ v_n \end{array} \rightB\]

The reason for writing \(C_B(\vect{v})\) as a column rather than a row will become clear later. Note that \(C_B(\vect{b}_i) = \vect{e}_i\) is column \(i\) of \(I_n\).

027904 The coordinate vector of \(\vect{v} = (2, 1, 3)\) with respect to the ordered basis \(B = \{(1, 1, 0), (1, 0, 1), (0, 1, 1)\}\) of \(\RR^3\) is \(C_B(\vect{v}) = \leftB \begin{array}{c} 0 \\ 2 \\ 1 \end{array} \rightB\) because \[\vect{v} = (2, 1, 3) = 0(1, 1, 0) + 2(1, 0, 1) + 1(0, 1, 1)\]
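Numerically, a coordinate vector is just the solution of a linear system whose coefficient columns are the basis vectors. The following sketch (assuming NumPy is available; the variable names are illustrative, not from the text) recovers \(C_B(\vect{v})\) for the example above.

```python
import numpy as np

# Ordered basis B = {(1,1,0), (1,0,1), (0,1,1)} of R^3, stacked as columns.
B = np.column_stack([(1, 1, 0), (1, 0, 1), (0, 1, 1)]).astype(float)
v = np.array([2.0, 1.0, 3.0])

# v = c1*b1 + c2*b2 + c3*b3 is exactly the linear system B @ c = v.
c = np.linalg.solve(B, v)          # c is C_B(v)
assert np.allclose(c, [0, 2, 1])   # matches the expansion shown above
```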

027908 If \(V\) has dimension \(n\) and \(B = \{\vect{b}_1, \vect{b}_2, \dots, \vect{b}_n\}\) is any ordered basis of \(V\), the coordinate transformation \(C_B : V \to \RR^n\) is an isomorphism. In fact, \(C_B^{-1} : \RR^n \to V\) is given by \[C_B^{-1} \leftB \begin{array}{c} v_1 \\ v_2 \\ \vdots \\ v_n \end{array} \rightB = v_1\vect{b}_1 + v_2\vect{b}_2 + \cdots + v_n\vect{b}_n \quad \mbox{for all} \quad \leftB \begin{array}{c} v_1 \\ v_2 \\ \vdots \\ v_n \end{array} \rightB \mbox{ in } \RR^n.\]

The verification that \(C_B\) is linear is Exercise [ex:ex9_1_13]. If \(T : \RR^n \to V\) is the map denoted \(C_B^{-1}\) in the theorem, one verifies (Exercise [ex:ex9_1_13]) that \(TC_B = 1_V\) and \(C_BT = 1_{\RR^n}\). Note that \(C_B(\vect{b}_j)\) is column \(j\) of the identity matrix, so \(C_B\) carries the basis \(B\) to the standard basis of \(\RR^n\), proving again that it is an isomorphism (Theorem [thm:022044]).


Now let \(T : V \to W\) be any linear transformation where \(\func{dim}V = n\) and \(\func{dim}W = m\), and let \(B = \{\vect{b}_1, \vect{b}_2, \dots, \vect{b}_n\}\) and \(D\) be ordered bases of \(V\) and \(W\), respectively. Then \(C_B : V \to \RR^n\) and \(C_D : W \to \RR^m\) are isomorphisms and we have the situation shown in the diagram where \(A\) is an \(m \times n\) matrix (to be determined). In fact, the composite \[C_DTC_B^{-1} : \RR^n \to \RR^m \mbox{ is a linear transformation}\] so Theorem [thm:005789] shows that a unique \(m \times n\) matrix \(A\) exists such that \[C_DTC_B^{-1} = T_A, \quad \mbox{equivalently} \quad C_DT = T_AC_B\] \(T_A\) acts by left multiplication by \(A\), so this latter condition is \[C_D[T(\vect{v})] = AC_B(\vect{v}) \mbox{ for all } \vect{v} \mbox{ in } V\] This requirement completely determines \(A\). Indeed, the fact that \(C_B(\vect{b}_j)\) is column \(j\) of the identity matrix gives \[\mbox{column } j \mbox{ of } A = AC_B(\vect{b}_j) = C_D[T(\vect{b}_j)]\] for all \(j\). Hence, in terms of its columns, \[A = \leftB \begin{array}{cccc} C_D[T(\vect{b}_1)] & C_D[T(\vect{b}_2)] & \cdots & C_D[T(\vect{b}_n)] \end{array} \rightB\]

The Matrix \(M_{DB}(T)\) of \(T : V \to W\) for bases \(D\) and \(B\) 027950 This is called the matrix of \(T\) corresponding to the ordered bases \(B\) and \(D\), and we use the following notation: \[M_{DB}(T) = \leftB \begin{array}{cccc} C_D[T(\vect{b}_1)] & C_D[T(\vect{b}_2)] & \cdots & C_D[T(\vect{b}_n)] \end{array} \rightB\]

This discussion is summarized in the following important theorem.

027955 Let \(T : V \to W\) be a linear transformation where \(\func{dim}V = n\) and \(\func{dim}W = m\), and let \(B = \{\vect{b}_1, \dots, \vect{b}_n\}\) and \(D\) be ordered bases of \(V\) and \(W\), respectively. Then the matrix \(M_{DB}(T)\) just given is the unique \(m \times n\) matrix \(A\) that satisfies \[C_DT = T_AC_B\] Hence the defining property of \(M_{DB}(T)\) is \[C_D[T(\vect{v})] = M_{DB}(T)C_B(\vect{v}) \mbox{ for all } \vect{v} \mbox{ in } V\] The matrix \(M_{DB}(T)\) is given in terms of its columns by \[M_{DB}(T) = \leftB \begin{array}{cccc} C_D[T(\vect{b}_1)] & C_D[T(\vect{b}_2)] & \cdots & C_D[T(\vect{b}_n)] \end{array} \rightB\]

The fact that \(T = C_D^{-1}T_AC_B\) means that the action of \(T\) on a vector \(\vect{v}\) in \(V\) can be performed by first taking coordinates (that is, applying \(C_B\) to \(\vect{v}\)), then multiplying by \(A\) (applying \(T_A\)), and finally converting the resulting \(m\)-tuple back to a vector in \(W\) (applying \(C_D^{-1}\)).

027973 Define \(T : \vectspace{P}_2 \to \RR^2\) by \(T(a + bx + cx^2) = (a + c, b - a - c)\) for all polynomials \(a + bx + cx^2\). If \(B = \{\vect{b}_1, \vect{b}_2, \vect{b}_3\}\) and \(D = \{\vect{d}_1, \vect{d}_2\}\) where \[\vect{b}_1 = 1,\ \vect{b}_2 = x,\ \vect{b}_3 = x^2 \quad \mbox{and} \quad \vect{d}_1 = (1, 0),\ \vect{d}_2 = (0, 1)\] compute \(M_{DB}(T)\) and verify Theorem [thm:027955].

We have \(T(\vect{b}_1) = \vect{d}_1 - \vect{d}_2\), \(T(\vect{b}_2) = \vect{d}_2\), and \(T(\vect{b}_3) = \vect{d}_1 - \vect{d}_2\). Hence \[M_{DB}(T) = \leftB \begin{array}{ccc} C_D[T(\vect{b}_1)] & C_D[T(\vect{b}_2)] & C_D[T(\vect{b}_3)] \end{array} \rightB = \leftB \begin{array}{rrr} 1 & 0 & 1 \\ -1 & 1 & -1 \end{array} \rightB\] If \(\vect{v} = a + bx + cx^2 = a\vect{b}_1 + b\vect{b}_2 + c\vect{b}_3\), then \(T(\vect{v}) = (a + c)\vect{d}_1 + (b - a - c)\vect{d}_2\), so \[C_D[T(\vect{v})] = \leftB \begin{array}{c} a + c \\ b - a - c \end{array} \rightB = \leftB \begin{array}{rrr} 1 & 0 & 1 \\ -1 & 1 & -1 \end{array} \rightB \leftB \begin{array}{c} a \\ b \\ c \end{array} \rightB = M_{DB}(T)C_B(\vect{v})\] as asserted by Theorem [thm:027955].
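The computation above can be checked mechanically: build \(M_{DB}(T)\) column by column as \(C_D[T(\vect{b}_j)]\) and compare \(M_{DB}(T)C_B(\vect{v})\) with \(C_D[T(\vect{v})]\). A sketch assuming NumPy is available (the function `T` below encodes the transformation on coefficient triples and is an illustrative name, not from the text):

```python
import numpy as np

def T(p):
    # p = (a, b, c) represents a + bx + cx^2; T(p) = (a + c, b - a - c).
    a, b, c = p
    return np.array([a + c, b - a - c], dtype=float)

# B = {1, x, x^2} and D = standard basis of R^2, so C_B and C_D are the
# identity on coefficient triples and coordinate pairs respectively.
M = np.column_stack([T(e) for e in np.eye(3)])  # columns C_D[T(b_j)]

v = np.array([2.0, -1.0, 5.0])                  # v = 2 - x + 5x^2, C_B(v) = (2,-1,5)
assert np.allclose(M @ v, T(v))                 # C_D[T(v)] = M_DB(T) C_B(v)
```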

The next example shows how to determine the action of a transformation from its matrix.

028008 Suppose \(T : \vectspace{M}_{22} \to \RR^3\) is linear with matrix \(M_{DB}(T) = \leftB \begin{array}{rrrr} 1 & -1 & 0 & 0 \\ 0 & 1 & -1 & 0 \\ 0 & 0 & 1 & -1 \end{array} \rightB\) where \[B = \left\{ \leftB \begin{array}{cc} 1 & 0 \\ 0 & 0 \end{array} \rightB, \leftB \begin{array}{cc} 0 & 1 \\ 0 & 0 \end{array} \rightB, \leftB \begin{array}{cc} 0 & 0 \\ 1 & 0 \end{array} \rightB, \leftB \begin{array}{cc} 0 & 0 \\ 0 & 1 \end{array} \rightB \right\} \mbox{ and } D = \{(1, 0, 0), (0, 1, 0), (0, 0, 1)\}\] Compute \(T(\vect{v})\) where \(\vect{v} = \leftB \begin{array}{cc} a & b \\ c & d \end{array} \rightB\).

The idea is to compute \(C_D[T(\vect{v})]\) first, and then obtain \(T(\vect{v})\). We have \[C_D[T(\vect{v})] = M_{DB}(T)C_B(\vect{v}) = \leftB \begin{array}{rrrr} 1 & -1 & 0 & 0 \\ 0 & 1 & -1 & 0 \\ 0 & 0 & 1 & -1 \end{array} \rightB \leftB \begin{array}{c} a \\ b \\ c \\ d \end{array} \rightB = \leftB \begin{array}{c} a - b \\ b - c \\ c - d \end{array} \rightB\]

\[\begin{aligned} \mbox{Hence } T(\vect{v}) &= (a - b)(1, 0, 0) + (b - c)(0, 1, 0) + (c - d)(0, 0, 1) \\ &= (a - b, b - c, c - d)\end{aligned}\]
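The two-step recipe (multiply \(C_B(\vect{v})\) by the matrix, then read \(T(\vect{v})\) off through \(C_D^{-1}\)) can be sketched as follows; NumPy is assumed, and the entries \(a, b, c, d\) are sample values chosen for illustration.

```python
import numpy as np

M = np.array([[1., -1.,  0.,  0.],
              [0.,  1., -1.,  0.],
              [0.,  0.,  1., -1.]])     # M_DB(T) from the example

a, b, c, d = 4.0, 3.0, 2.0, 1.0
CB_v = np.array([a, b, c, d])           # C_B(v) for v = [[a, b], [c, d]]
CD_Tv = M @ CB_v                        # C_D[T(v)]

# D is the standard basis of R^3, so C_D^{-1} just reads off the entries:
assert np.allclose(CD_Tv, [a - b, b - c, c - d])
```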

The next two examples will be referred to later.

028025 Let \(A\) be an \(m \times n\) matrix, and let \(T_A : \RR^n \to \RR^m\) be the matrix transformation induced by \(A\): \(T_A(\vect{x}) = A\vect{x}\) for all columns \(\vect{x}\) in \(\RR^n\). If \(B\) and \(D\) are the standard bases of \(\RR^n\) and \(\RR^m\), respectively (ordered as usual), then \[M_{DB}(T_A) = A\] In other words, the matrix of \(T_A\) corresponding to the standard bases is \(A\) itself.

Write \(B = \{\vect{e}_1, \dots, \vect{e}_n\}\). Because \(D\) is the standard basis of \(\RR^m\), it is easy to verify that \(C_D(\vect{y}) = \vect{y}\) for all columns \(\vect{y}\) in \(\RR^m\). Hence \[M_{DB}(T_A) = \leftB \begin{array}{cccc} T_A(\vect{e}_1) & T_A(\vect{e}_2) & \cdots & T_A(\vect{e}_n) \end{array} \rightB = \leftB \begin{array}{cccc} A\vect{e}_1 & A\vect{e}_2 & \cdots & A\vect{e}_n \end{array} \rightB = A\] because \(A\vect{e}_j\) is the \(j\)th column of \(A\).
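As a quick check of this example, building \(M_{DB}(T_A)\) column by column from the standard bases reproduces \(A\) itself. A sketch assuming NumPy, with an arbitrary sample matrix:

```python
import numpy as np

A = np.array([[1., 2.,  0.],
              [0., 1., -1.]])   # sample 2 x 3 matrix, so T_A : R^3 -> R^2
n = A.shape[1]

# Column j of M_DB(T_A) is C_D[T_A(e_j)] = A e_j, since C_D is the identity
# when D is the standard basis of R^m.
M = np.column_stack([A @ np.eye(n)[:, j] for j in range(n)])
assert np.array_equal(M, A)
```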

028048 Let \(V\) and \(W\) have ordered bases \(B\) and \(D\), respectively. Let \(\func{dim}V = n\).

  1. The identity transformation \(1_V : V \to V\) has matrix \(M_{BB}(1_V) = I_n\).

  2. The zero transformation \(0 : V \to W\) has matrix \(M_{DB}(0) = 0\).

The first result in Example [exa:028048] is false if the two bases of \(V\) are not equal. In fact, if \(B\) is the standard basis of \(\RR^n\), then the basis \(D\) of \(\RR^n\) can be chosen so that \(M_{DB}(1_{\RR^n})\) turns out to be any invertible matrix we wish (Exercise [ex:ex9_1_14]).

The next two theorems show that composition of linear transformations is compatible with multiplication of the corresponding matrices.

028067


Let \(V \stackrel{T}{\to} W \stackrel{S}{\to} U\) be linear transformations and let \(B\), \(D\), and \(E\) be finite ordered bases of \(V\), \(W\), and \(U\), respectively. Then \[M_{EB}(ST) = M_{ED}(S) \cdot M_{DB}(T)\]

We use the property in Theorem [thm:027955] three times. If \(\vect{v}\) is in \(V\), \[M_{ED}(S)M_{DB}(T)C_B(\vect{v}) = M_{ED}(S)C_D[T(\vect{v})] = C_E[ST(\vect{v})] = M_{EB}(ST)C_B(\vect{v})\] If \(B = \{\vect{e}_1, \dots, \vect{e}_n\}\), then \(C_B(\vect{e}_j)\) is column \(j\) of \(I_n\). Hence taking \(\vect{v} = \vect{e}_j\) shows that \(M_{ED}(S)M_{DB}(T)\) and \(M_{EB}(ST)\) have equal \(j\)th columns. The theorem follows.
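With standard bases throughout (so that each \(M\)-matrix is the matrix of the transformation itself, as in Example 028025), the theorem says the matrix of a composite is the product of the matrices. A sketch assuming NumPy, with randomly chosen sample matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
A_T = rng.integers(-3, 4, size=(4, 3)).astype(float)  # T : R^3 -> R^4
A_S = rng.integers(-3, 4, size=(2, 4)).astype(float)  # S : R^4 -> R^2

# Build M_EB(ST) column by column: column j is S(T(e_j)).
M_ST = np.column_stack([A_S @ (A_T @ e) for e in np.eye(3)])

# Theorem: M_EB(ST) = M_ED(S) M_DB(T), here the product A_S @ A_T.
assert np.allclose(M_ST, A_S @ A_T)
```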

028086 Let \(T : V \to W\) be a linear transformation, where \(\func{dim}V = \func{dim}W = n\). The following are equivalent.

  1. \(T\) is an isomorphism.

  2. \(M_{DB}(T)\) is invertible for all ordered bases \(B\) and \(D\) of \(V\) and \(W\).

  3. \(M_{DB}(T)\) is invertible for some pair of ordered bases \(B\) and \(D\) of \(V\) and \(W\).

When this is the case, \([M_{DB}(T)]^{-1} = M_{BD}(T^{-1})\).

(1) \(\Rightarrow\) (2). We have \(V \stackrel{T}{\to} W \stackrel{T^{-1}}{\to} V\), so Theorem [thm:028067] and Example [exa:028048] give \[M_{BD}(T^{-1})M_{DB}(T) = M_{BB}(T^{-1}T) = M_{BB}(1_V) = I_n\] Similarly, \(M_{DB}(T)M_{BD}(T^{-1}) = I_n\), proving (2) (and the last statement in the theorem).

(2) \(\Rightarrow\) (3). This is clear.


(3) \(\Rightarrow\) (1). Suppose that \(M_{DB}(T)\) is invertible for some bases \(B\) and \(D\) and, for convenience, write \(A = M_{DB}(T)\). Then we have \(C_DT = T_AC_B\) by Theorem [thm:027955], so \[T = (C_D)^{-1}T_AC_B\] by Theorem [thm:027908] where \((C_D)^{-1}\) and \(C_B\) are isomorphisms. Hence (1) follows if we can show that \(T_A : \RR^n \to \RR^n\) is also an isomorphism. But \(A\) is invertible by (3) and one verifies that \(T_AT_{A^{-1}} = 1_{\RR^n} = T_{A^{-1}}T_A\). So \(T_A\) is indeed invertible (and \((T_A)^{-1} = T_{A^{-1}}\)).

In Section [sec:7_2] we defined the \(\func{rank}\) of a linear transformation \(T : V \to W\) by \(\func{rank}T = \func{dim}(\func{im}T)\). Moreover, if \(A\) is any \(m \times n\) matrix and \(T_A : \RR^n \to \RR^m\) is the matrix transformation, we showed that \(\func{rank}(T_A) = \func{rank}A\). So it may not be surprising that \(\func{rank}T\) equals the \(\func{rank}\) of any matrix of \(T\).

028139 Let \(T : V \to W\) be a linear transformation where \(\func{dim}V = n\) and \(\func{dim}W = m\). If \(B\) and \(D\) are any ordered bases of \(V\) and \(W\), then \(\func{rank}T = \func{rank}[M_{DB}(T)]\).

Write \(A = M_{DB}(T)\) for convenience. The column space of \(A\) is \(U = \{A\vect{x} \mid \vect{x} \mbox{ in } \RR^n\}\). This means \(\func{rank}A = \func{dim}U\) and so, because \(\func{rank}T = \func{dim}(\func{im}T)\), it suffices to find an isomorphism \(S : \func{im}T \to U\). Now every vector in \(\func{im}T\) has the form \(T(\vect{v})\), \(\vect{v}\) in \(V\). By Theorem [thm:027955], \(C_D[T(\vect{v})] = AC_B(\vect{v})\) lies in \(U\). So define \(S : \func{im}T \to U\) by \[S[T(\vect{v})] = C_D[T(\vect{v})] \mbox{ for all vectors } T(\vect{v}) \mbox{ in } \func{im}T\] The fact that \(C_D\) is linear and one-to-one gives immediately that \(S\) is linear and one-to-one. To see that \(S\) is onto, let \(A\vect{x}\) be any member of \(U\), \(\vect{x}\) in \(\RR^n\). Then \(\vect{x} = C_B(\vect{v})\) for some \(\vect{v}\) in \(V\) because \(C_B\) is onto. Hence \(A\vect{x} = AC_B(\vect{v}) = C_D[T(\vect{v})] = S[T(\vect{v})]\), so \(S\) is onto. This means that \(S\) is an isomorphism.

028158 Define \(T : \vectspace{P}_2 \to \RR^3\) by \(T(a + bx + cx^2) = (a - 2b, 3c - 2a, 3c - 4b)\) for \(a\), \(b\), \(c \in \RR\). Compute \(\func{rank}T\).

Since \(\func{rank}T = \func{rank}[M_{DB}(T)]\) for any bases \(B \subseteq \vectspace{P}_2\) and \(D \subseteq \RR^3\), we choose the most convenient ones: \(B = \{1, x, x^2\}\) and \(D = \{(1, 0, 0), (0, 1, 0), (0, 0, 1)\}\). Then \(M_{DB}(T) = \leftB \begin{array}{ccc} C_D[T(1)] & C_D[T(x)] & C_D[T(x^2)] \end{array} \rightB = A\) where \[A = \leftB \begin{array}{rrr} 1 & -2 & 0 \\ -2 & 0 & 3 \\ 0 & -4 & 3 \end{array} \rightB. \quad \mbox{Since} \quad A \to \leftB \begin{array}{rrr} 1 & -2 & 0 \\ 0 & -4 & 3 \\ 0 & -4 & 3 \end{array} \rightB \to \leftB \begin{array}{rrr} 1 & -2 & 0 \\ 0 & 1 & -\frac{3}{4} \\ 0 & 0 & 0 \end{array} \rightB\] we have \(\func{rank}A = 2\). Hence \(\func{rank}T = 2\) as well.
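The rank computation can be confirmed numerically. Assuming NumPy is available, `matrix_rank` applied to \(M_{DB}(T)\) gives \(\func{rank}T\) by the theorem above:

```python
import numpy as np

A = np.array([[ 1., -2., 0.],
              [-2.,  0., 3.],
              [ 0., -4., 3.]])   # M_DB(T) for the convenient bases above

# Row 3 equals row 2 plus twice row 1, so the rank is 2.
assert np.linalg.matrix_rank(A) == 2
```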

We conclude with an example showing that the matrix of a linear transformation can be made very simple by a careful choice of the two bases.

028178 Let \(T : V \to W\) be a linear transformation where \(\func{dim}V = n\) and \(\func{dim}W = m\). Choose an ordered basis \(B = \{\vect{b}_1, \dots, \vect{b}_r, \vect{b}_{r+1}, \dots, \vect{b}_n\}\) of \(V\) in which \(\{\vect{b}_{r+1}, \dots, \vect{b}_n\}\) is a basis of \(\func{ker}T\), possibly empty. Then \(\{T(\vect{b}_1), \dots, T(\vect{b}_r)\}\) is a basis of \(\func{im}T\) by Theorem [thm:021572], so extend it to an ordered basis \(D = \{T(\vect{b}_1), \dots, T(\vect{b}_r), \vect{f}_{r+1}, \dots, \vect{f}_m\}\) of \(W\). Because \(T(\vect{b}_{r+1}) = \cdots = T(\vect{b}_n) = \vect{0}\), we have \[M_{DB}(T) = \leftB \begin{array}{cccccc} C_D[T(\vect{b}_1)] & \cdots & C_D[T(\vect{b}_r)] & C_D[T(\vect{b}_{r+1})] & \cdots & C_D[T(\vect{b}_n)] \end{array} \rightB = \leftB \begin{array}{cc} I_r & 0 \\ 0 & 0 \end{array} \rightB\] Incidentally, this shows that \(\func{rank}T = r\) by Theorem [thm:028139].

Exercises for 9.1



In each case, find the coordinates of \(\vect{v}\) with respect to the basis \(B\) of the vector space \(V\).

  1. \(V = \vectspace{P}_2\), \(\vect{v} = 2x^2 + x - 1\), \(B = \{x + 1, x^2, 3\}\)

  2. \(V = \vectspace{P}_2\), \(\vect{v} = ax^2 + bx + c\), \(B = \{x^2, x + 1, x + 2\}\)

  3. \(V = \RR^3\), \(\vect{v} = (1, -1, 2)\),
    \(B = \{(1, -1, 0), (1, 1, 1), (0, 1, 1)\}\)

  4. \(V = \RR^3\), \(\vect{v} = (a, b, c)\),
    \(B = \{(1, -1, 2), (1, 1, -1), (0, 0, 1)\}\)

  5. \(V = \vectspace{M}_{22}\), \(\vect{v} = \leftB \begin{array}{rr} 1 & 2 \\ -1 & 0 \end{array} \rightB\),
    \(B = \left\{ \leftB \begin{array}{rr} 1 & 1 \\ 0 & 0 \end{array} \rightB, \leftB \begin{array}{rr} 1 & 0 \\ 1 & 0 \end{array} \rightB, \leftB \begin{array}{rr} 0 & 0 \\ 1 & 1 \end{array} \rightB, \leftB \begin{array}{rr} 1 & 0 \\ 0 & 1 \end{array} \rightB \right\}\)

  1. \(\leftB \begin{array}{c} a \\ 2b - c \\ c - b \end{array} \rightB\)

  2. \(\frac{1}{2} \leftB \begin{array}{c} a - b \\ a + b \\ -a + 3b + 2c \end{array} \rightB\)

Suppose \(T : \vectspace{P}_2 \to \RR^2\) is a linear transformation. If \(B = \{1, x, x^2\}\) and \(D = \{(1, 1), (0, 1)\}\), find the action of \(T\) given:

  1. \(M_{DB}(T) = \leftB \begin{array}{rrr} 1 & 2 & -1 \\ -1 & 0 & 1 \end{array} \rightB\)

  2. \(M_{DB}(T) = \leftB \begin{array}{rrr} 2 & 1 & 3 \\ -1 & 0 & -2 \end{array} \rightB\)

  1. Hence \[\begin{aligned} T(\vect{v}) &= (2a + b + 3c)(1, 1) + (-a - 2c)(0, 1) \\ &= (2a + b + 3c, a + b + c).\end{aligned}\]

In each case, find the matrix of the linear transformation \(T : V \to W\) corresponding to the bases \(B\) and \(D\) of \(V\) and \(W\), respectively.

  1. \(T : \vectspace{M}_{22} \to \RR\), \(T(A) = \func{tr}A\);
    \(B = \left\{ \leftB \begin{array}{rr} 1 & 0 \\ 0 & 0 \end{array} \rightB, \leftB \begin{array}{rr} 0 & 1 \\ 0 & 0 \end{array} \rightB, \leftB \begin{array}{rr} 0 & 0 \\ 1 & 0 \end{array} \rightB, \leftB \begin{array}{rr} 0 & 0 \\ 0 & 1 \end{array} \rightB \right\}\), \(D = \{1\}\)

  2. \(T : \vectspace{M}_{22} \to \vectspace{M}_{22}\), \(T(A) = A^T\);
    \(B = D = \left\{ \leftB \begin{array}{rr} 1 & 0 \\ 0 & 0 \end{array} \rightB, \leftB \begin{array}{rr} 0 & 1 \\ 0 & 0 \end{array} \rightB, \leftB \begin{array}{rr} 0 & 0 \\ 1 & 0 \end{array} \rightB, \leftB \begin{array}{rr} 0 & 0 \\ 0 & 1 \end{array} \rightB \right\}\)

  3. \(T : \vectspace{P}_2 \to \vectspace{P}_3\), \(T[p(x)] = xp(x)\); \(B = \{1, x, x^2\}\) and \(D = \{1, x, x^2, x^3\}\)

  4. \(T : \vectspace{P}_2 \to \vectspace{P}_2\), \(T[p(x)] = p(x + 1)\);
    \(B = D = \{1, x, x^2\}\)

  1. \(\leftB \begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{array} \rightB\)

  2. \(\leftB \begin{array}{ccc} 1 & 1 & 1 \\ 0 & 1 & 2 \\ 0 & 0 & 1 \end{array} \rightB\)

In each case, find the matrix of \(T : V \to W\) corresponding to the bases \(B\) and \(D\), respectively, and use it to compute \(C_D[T(\vect{v})]\), and hence \(T(\vect{v})\).

  1. \(T : \RR^3 \to \RR^4\), \(T(x, y, z) = (x + z, 2z, y - z, x + 2y)\); \(B\) and \(D\) standard; \(\vect{v} = (1, -1, 3)\)

  2. \(T : \RR^2 \to \RR^4\), \(T(x, y) = (2x - y, 3x + 2y, 4y, x)\); \(B = \{(1, 1), (1, 0)\}\), \(D\) standard; \(\vect{v} = (a, b)\)

  3. \(T : \vectspace{P}_2 \to \RR^2\), \(T(a + bx + cx^2) = (a + c, 2b)\);
    \(B = \{1, x, x^2\}\), \(D = \{(1, 0), (1, -1)\}\);
    \(\vect{v} = a + bx + cx^2\)

  4. \(T : \vectspace{P}_2 \to \RR^2\), \(T(a + bx + cx^2) = (a + b, c)\);
    \(B = \{1, x, x^2\}\), \(D = \{(1, -1), (1, 1)\}\);
    \(\vect{v} = a + bx + cx^2\)

  5. \(T : \vectspace{M}_{22} \to \RR\), \(T\leftB \begin{array}{cc} a & b \\ c & d \end{array} \rightB = a + b + c + d\);
    \(B = \left\{ \leftB \begin{array}{rr} 1 & 0 \\ 0 & 0 \end{array} \rightB, \leftB \begin{array}{rr} 0 & 1 \\ 0 & 0 \end{array} \rightB, \leftB \begin{array}{rr} 0 & 0 \\ 1 & 0 \end{array} \rightB, \leftB \begin{array}{rr} 0 & 0 \\ 0 & 1 \end{array} \rightB \right\}\),
    \(D = \{1\}\); \(\vect{v} = \leftB \begin{array}{cc} a & b \\ c & d \end{array} \rightB\)

  6. \(T : \vectspace{M}_{22} \to \vectspace{M}_{22}\),
    \(T\leftB \begin{array}{cc} a & b \\ c & d \end{array} \rightB = \leftB \begin{array}{cc} a & b + c \\ b + c & d \end{array} \rightB\);
    \(B = D = \left\{ \leftB \begin{array}{rr} 1 & 0 \\ 0 & 0 \end{array} \rightB, \leftB \begin{array}{rr} 0 & 1 \\ 0 & 0 \end{array} \rightB, \leftB \begin{array}{rr} 0 & 0 \\ 1 & 0 \end{array} \rightB, \leftB \begin{array}{rr} 0 & 0 \\ 0 & 1 \end{array} \rightB \right\}\); \(\vect{v} = \leftB \begin{array}{cc} a & b \\ c & d \end{array} \rightB\)

  1. \(\leftB \begin{array}{cc} 1 & 2 \\ 5 & 3 \\ 4 & 0 \\ 1 & 1 \end{array} \rightB\);
    \(C_D[T(a, b)] = \leftB \begin{array}{cc} 1 & 2 \\ 5 & 3 \\ 4 & 0 \\ 1 & 1 \end{array} \rightB \leftB \begin{array}{c} b \\ a - b \end{array} \rightB = \leftB \begin{array}{c} 2a - b \\ 3a + 2b \\ 4b \\ a \end{array} \rightB\)

  2. \(\frac{1}{2} \leftB \begin{array}{rrr} 1 & 1 & -1 \\ 1 & 1 & 1 \end{array} \rightB\); \(C_D[T(a + bx + cx^2)] = \frac{1}{2} \leftB \begin{array}{rrr} 1 & 1 & -1 \\ 1 & 1 & 1 \end{array} \rightB \leftB \begin{array}{c} a \\ b \\ c \end{array} \rightB = \frac{1}{2} \leftB \begin{array}{c} a + b - c \\ a + b + c \end{array} \rightB\)

  3. \(\leftB \begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{array} \rightB\); \(C_D\left(T\leftB \begin{array}{cc} a & b \\ c & d \end{array} \rightB\right) = \leftB \begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{array} \rightB \leftB \begin{array}{c} a \\ b \\ c \\ d \end{array} \rightB = \leftB \begin{array}{c} a \\ b + c \\ b + c \\ d \end{array} \rightB\)

In each case, verify Theorem [thm:028067]. Use the standard basis in \(\RR^n\) and \(\{1, x, x^2\}\) in \(\vectspace{P}_2\).

  1. \(\RR^3 \stackrel{T}{\to} \RR^2 \stackrel{S}{\to} \RR^4\); \(T(a, b, c) = (a + b, b - c)\), \(S(a, b) = (a, b - 2a, 3b, a + b)\)

  2. \(\RR^3 \stackrel{T}{\to} \RR^4 \stackrel{S}{\to} \RR^2\);
    \(T(a, b, c) = (a + b, c + b, a + c, b - a)\),
    \(S(a, b, c, d) = (a + b, c - d)\)

  3. \(\vectspace{P}_2 \stackrel{T}{\to} \RR^3 \stackrel{S}{\to} \vectspace{P}_2\); \(T(a + bx + cx^2) = (a, b - c, c - a)\), \(S(a, b, c) = b + cx + (a - c)x^2\)

  4. \(\RR^3 \stackrel{T}{\to} \vectspace{P}_2 \stackrel{S}{\to} \RR^2\);
    \(T(a, b, c) = (a - b) + (c - a)x + bx^2\),
    \(S(a + bx + cx^2) = (a - b, c)\)

  1. \(M_{ED}(S)M_{DB}(T) = {}\)
    \(\leftB \begin{array}{rrrr} 1 & 1 & 0 & 0 \\ 0 & 0 & 1 & -1 \end{array} \rightB \leftB \begin{array}{rrr} 1 & 1 & 0 \\ 0 & 1 & 1 \\ 1 & 0 & 1 \\ -1 & 1 & 0 \end{array} \rightB = {}\)
    \(\leftB \begin{array}{rrr} 1 & 2 & 1 \\ 2 & -1 & 1 \end{array} \rightB = M_{EB}(ST)\)

  2. \(M_{ED}(S)M_{DB}(T) = {}\)
    \(\leftB \begin{array}{rrr} 1 & -1 & 0 \\ 0 & 0 & 1 \end{array} \rightB \leftB \begin{array}{rrr} 1 & -1 & 0 \\ -1 & 0 & 1 \\ 0 & 1 & 0 \end{array} \rightB = {}\)
    \(\leftB \begin{array}{rrr} 2 & -1 & -1 \\ 0 & 1 & 0 \end{array} \rightB = M_{EB}(ST)\)

Verify Theorem [thm:028067] for
\(\vectspace{M}_{22} \stackrel{T}{\to} \vectspace{M}_{22} \stackrel{S}{\to} \vectspace{P}_2\) where \(T(A) = A^T\) and
\(S\leftB \begin{array}{cc} a & b \\ c & d \end{array} \rightB = b + (a + d)x + cx^2\). Use the bases
\(B = D = \left\{ \leftB \begin{array}{rr} 1 & 0 \\ 0 & 0 \end{array} \rightB, \leftB \begin{array}{rr} 0 & 1 \\ 0 & 0 \end{array} \rightB, \leftB \begin{array}{rr} 0 & 0 \\ 1 & 0 \end{array} \rightB, \leftB \begin{array}{rr} 0 & 0 \\ 0 & 1 \end{array} \rightB \right\}\)
and \(E = \{1, x, x^2\}\).

In each case, find \(T^{-1}\) and verify that \([M_{DB}(T)]^{-1} = M_{BD}(T^{-1})\).

  1. \(T : \RR^2 \to \RR^2\), \(T(a, b) = (a + 2b, 2a + 5b)\);
    \(B = D =\) standard

  2. \(T : \RR^3 \to \RR^3\), \(T(a, b, c) = (b + c, a + c, a + b)\); \(B = D =\) standard

  3. \(T : \vectspace{P}_2 \to \RR^3\), \(T(a + bx + cx^2) = (a - c, b, 2a - c)\); \(B = \{1, x, x^2\}\), \(D =\) standard

  4. \(T : \vectspace{P}_2 \to \RR^3\),
    \(T(a + bx + cx^2) = (a + b + c, b + c, c)\);
    \(B = \{1, x, x^2\}\), \(D =\) standard

  1. \(T^{-1}(a, b, c) = \frac{1}{2}(b + c - a, a + c - b, a + b - c)\);
    \(M_{DB}(T) = \leftB \begin{array}{ccc} 0 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \end{array} \rightB\);
    \(M_{BD}(T^{-1}) = \frac{1}{2} \leftB \begin{array}{rrr} -1 & 1 & 1 \\ 1 & -1 & 1 \\ 1 & 1 & -1 \end{array} \rightB\)

  2. \(T^{-1}(a, b, c) = (a - b) + (b - c)x + cx^2\);
    \(M_{DB}(T) = \leftB \begin{array}{ccc} 1 & 1 & 1 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{array} \rightB\);
    \(M_{BD}(T^{-1}) = \leftB \begin{array}{rrr} 1 & -1 & 0 \\ 0 & 1 & -1 \\ 0 & 0 & 1 \end{array} \rightB\)

In each case, show that \(M_{DB}(T)\) is invertible and use the fact that \(M_{BD}(T^{-1}) = [M_{DB}(T)]^{-1}\) to determine the action of \(T^{-1}\).

  1. \(T : \vectspace{P}_2 \to \RR^3\), \(T(a + bx + cx^2) = (a + c, c, b - c)\); \(B = \{1, x, x^2\}\), \(D =\) standard

  2. \(T : \vectspace{M}_{22} \to \RR^4\),
    \(T\leftB \begin{array}{cc} a & b \\ c & d \end{array} \rightB = (a + b + c, b + c, c, d)\);
    \(B = \left\{ \leftB \begin{array}{rr} 1 & 0 \\ 0 & 0 \end{array} \rightB, \leftB \begin{array}{rr} 0 & 1 \\ 0 & 0 \end{array} \rightB, \leftB \begin{array}{rr} 0 & 0 \\ 1 & 0 \end{array} \rightB, \leftB \begin{array}{rr} 0 & 0 \\ 0 & 1 \end{array} \rightB \right\}\), \(D =\) standard

  1. Hence \(C_B[T^{-1}(a, b, c, d)] = {}\)
    \(M_{BD}(T^{-1})C_D(a, b, c, d) = {}\)
    \(\leftB \begin{array}{rrrr} 1 & -1 & 0 & 0 \\ 0 & 1 & -1 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{array} \rightB \leftB \begin{array}{c} a \\ b \\ c \\ d \end{array} \rightB = \leftB \begin{array}{c} a - b \\ b - c \\ c \\ d \end{array} \rightB\), so \(T^{-1}(a, b, c, d) = \leftB \begin{array}{cc} a - b & b - c \\ c & d \end{array} \rightB\).

Let \(D : \vectspace{P}_3 \to \vectspace{P}_2\) be the differentiation map given by \(D[p(x)] = p^\prime(x)\). Find the matrix of \(D\) corresponding to the bases \(B = \{1, x, x^2, x^3\}\) and
\(E = \{1, x, x^2\}\), and use it to compute
\(D(a + bx + cx^2 + dx^3)\).

Use Theorem [thm:028086] to show that
\(T : V \to V\) is not an isomorphism if \(\func{ker}T \neq 0\) (assume \(\func{dim}V = n\)). [Hint: Choose any ordered basis \(B\) containing a vector in \(\func{ker}T\).]

Let \(T : V \to \RR\) be a linear transformation, and let \(D = \{1\}\) be the basis of \(\RR\). Given any ordered basis \(B = \{\vect{e}_1, \dots, \vect{e}_n\}\) of \(V\), show that
\(M_{DB}(T) = [T(\vect{e}_1) \cdots T(\vect{e}_n)]\).

Let \(T : V \to W\) be an isomorphism, let \(B = \{\vect{e}_1, \dots, \vect{e}_n\}\) be an ordered basis of \(V\), and let \(D = \{T(\vect{e}_1), \dots, T(\vect{e}_n)\}\). Show that \(M_{DB}(T) = I_n\), the \(n \times n\) identity matrix.

We have \(C_D[T(\vect{e}_j)] = \) column \(j\) of \(I_n\). Hence \(M_{DB}(T) = \leftB \begin{array}{cccc} C_D[T(\vect{e}_1)] & C_D[T(\vect{e}_2)] & \cdots & C_D[T(\vect{e}_n)] \end{array} \rightB = I_n\).

[ex:ex9_1_13] Complete the proof of Theorem [thm:027908].

[ex:ex9_1_14] Let \(U\) be any invertible \(n \times n\) matrix, and let \(D = \{\vect{f}_1, \vect{f}_2, \dots, \vect{f}_n\}\) where \(\vect{f}_j\) is column \(j\) of \(U\). Show that \(M_{BD}(1_{\RR^n}) = U\) when \(B\) is the standard basis of \(\RR^n\).

Let \(B\) be an ordered basis of the \(n\)-dimensional space \(V\) and let \(C_B : V \to \RR^n\) be the coordinate transformation. If \(D\) is the standard basis of \(\RR^n\), show that \(M_{DB}(C_B) = I_n\).

Let \(T : \vectspace{P}_2 \to \RR^3\) be defined by
\(T(p) = (p(0), p(1), p(2))\) for all \(p\) in \(\vectspace{P}_2\). Let
\(B = \{1, x, x^2\}\) and \(D = \{(1, 0, 0), (0, 1, 0), (0, 0, 1)\}\).

  1. Show that \(M_{DB}(T) = \leftB \begin{array}{ccc} 1 & 0 & 0 \\ 1 & 1 & 1 \\ 1 & 2 & 4 \end{array} \rightB\) and conclude that \(T\) is an isomorphism.

  2. Generalize to \(T : \vectspace{P}_n \to \RR^{n+1}\) where
    \(T(p) = (p(a_0), p(a_1), \dots, p(a_n))\) and \(a_0, a_1, \dots, a_n\) are distinct real numbers.

  1. If \(D\) is the standard basis of \(\RR^{n+1}\) and \(B = \{1, x, x^2, \dots, x^n\}\), then \(M_{DB}(T) = {}\)
    \(\leftB \begin{array}{cccc} C_D[T(1)] & C_D[T(x)] & \cdots & C_D[T(x^n)] \end{array} \rightB = \leftB \begin{array}{ccccc} 1 & a_0 & a_0^2 & \cdots & a_0^n \\ 1 & a_1 & a_1^2 & \cdots & a_1^n \\ 1 & a_2 & a_2^2 & \cdots & a_2^n \\ \vdots & \vdots & \vdots & & \vdots \\ 1 & a_n & a_n^2 & \cdots & a_n^n \end{array} \rightB\).

    This matrix has nonzero determinant by Theorem [thm:008552] (because the \(a_i\) are distinct), so \(T\) is an isomorphism.

Let \(T : \vectspace{P}_n \to \vectspace{P}_n\) be defined by \(T[p(x)] = p(x) + xp^\prime(x)\), where \(p^\prime(x)\) denotes the derivative. Show that \(T\) is an isomorphism by finding \(M_{BB}(T)\) when \(B = \{1, x, x^2, \dots, x^n\}\).

If \(k\) is any number, define
\(T_k : \vectspace{M}_{22} \to \vectspace{M}_{22}\) by \(T_k(A) = A + kA^T\).

  1. If \(B = {}\)
    \(\left\{ \leftB \begin{array}{rr} 1 & 0 \\ 0 & 0 \end{array} \rightB, \leftB \begin{array}{rr} 0 & 0 \\ 0 & 1 \end{array} \rightB, \leftB \begin{array}{rr} 0 & 1 \\ 1 & 0 \end{array} \rightB, \leftB \begin{array}{rr} 0 & 1 \\ -1 & 0 \end{array} \rightB \right\}\) find \(M_{BB}(T_k)\), and conclude that \(T_k\) is invertible if \(k \neq 1\) and \(k \neq -1\).

  2. Repeat for \(T_k : \vectspace{M}_{33} \to \vectspace{M}_{33}\). Can you generalize?

The remaining exercises require the following definition. If \(V\) and \(W\) are vector spaces, the set of all linear transformations from \(V\) to \(W\) will be denoted by \[\vectspace{L}(V, W) = \{T \mid T : V \to W \mbox{ is a linear transformation}\}\] Given \(S\) and \(T\) in \(\vectspace{L}(V, W)\) and \(a\) in \(\RR\), define \(S + T : V \to W\) and \(aT : V \to W\) by \[\begin{aligned} (S + T)(\vect{v}) &= S(\vect{v}) + T(\vect{v}) & \mbox{for all } \vect{v} \mbox{ in } V \\ (aT)(\vect{v}) &= aT(\vect{v}) & \mbox{for all } \vect{v} \mbox{ in } V\end{aligned}\]

[ex:ex9_1_19] Show that \(\vectspace{L}(V, W)\) is a vector space.

[ex:ex9_1_20] Show that the following properties hold provided that the transformations link together in such a way that all the operations are defined.

  1. \(R(ST) = (RS)T\)

  2. \(1_WT = T = T1_V\)

  3. \(R(S + T) = RS + RT\)

  4. \((S + T)R = SR + TR\)

  5. \((aS)T = a(ST) = S(aT)\)

  1. \([(S + T)R](\vect{v}) = (S + T)(R(\vect{v})) = S[R(\vect{v})] + T[R(\vect{v})] = SR(\vect{v}) + TR(\vect{v}) = [SR + TR](\vect{v})\) holds for all \(\vect{v}\) in \(V\). Hence \((S + T)R = SR + TR\).

Given \(S\) and \(T\) in \(\vectspace{L}(V, W)\), show that:

  1. \(\func{ker}S \cap \func{ker}T \subseteq \func{ker}(S + T)\)

  2. \(\func{im}(S + T) \subseteq \func{im}S + \func{im}T\)

  1. If \(\vect{w}\) lies in \(\func{im}(S + T)\), then \(\vect{w} = (S + T)(\vect{v})\) for some \(\vect{v}\) in \(V\). But then \(\vect{w} = S(\vect{v}) + T(\vect{v})\), so \(\vect{w}\) lies in \(\func{im}S + \func{im}T\).

Let \(V\) and \(W\) be vector spaces. If \(X\) is any subset of \(V\), define \[X^0 = \{T \mbox{ in } \vectspace{L}(V, W) \mid T(\vect{v}) = 0 \mbox{ for all } \vect{v} \mbox{ in } X\}\]

  1. Show that (X^{0}) is a subspace of ( vectspace{L}(V, W)).

  2. If (X subseteq X_{1}), show that (X_1^0 subseteq X^0).

  3. If (U) and (U_{1}) are subspaces of (V), show that
    ((U + U_1)^0 = U^0 cap U_1^0).

  1. If (X subseteq X_{1}), let (T) lie in (X_1^0). Then (T(vect{v}) = vect{0}) for all (vect{v}) in (X_{1}), whence (T(vect{v}) = vect{0}) for all (vect{v}) in (X). Thus (T) lies in (X^{0}), and we have shown that (X_1^0 subseteq X^{0}).

Tentukan (R: vectspace {M} _ {mn} ke vectspace {L} ( RR ^ n, RR ^ m) ) oleh (R (A) = T_ {A} ) untuk setiap (m times n ) matriks (A ), di mana (T_ {A}: RR ^ n to RR ^ m ) diberikan oleh (T_ {A} ( vect {x} ) = A vect {x} ) untuk semua ( vect {x} ) di ( RR ^ n ). Tunjukkan bahawa (R ) adalah isomorfisme.

Let (V) be any vector space (we do not assume it is finite dimensional). Given (vect{v}) in (V), define (S_{vect{v}} : RR to V) by (S_{vect{v}}(r) = rvect{v}) for all (r) in (RR).

  1. Show that (S_{vect{v}}) lies in (vectspace{L}(RR, V)) for each (vect{v}) in (V).

  2. Show that the map (R : V to vectspace{L}(RR, V)) given by (R(vect{v}) = S_{vect{v}}) is an isomorphism. [Hint: To show that (R) is onto, if (T) lies in (vectspace{L}(RR, V)), show that (T = S_{vect{v}}) where (vect{v} = T(1)).]

  1. (R) is linear means (S_{vect{v}+vect{w}} = S_{vect{v}} + S_{vect{w}}) and (S_{avect{v}} = aS_{vect{v}}). These are proved as follows: (S_{vect{v}+vect{w}}(r) = r(vect{v} + vect{w}) = rvect{v} + rvect{w} = S_{vect{v}}(r) + S_{vect{w}}(r) = (S_{vect{v}} + S_{vect{w}})(r)), and (S_{avect{v}}(r) = r(avect{v}) = a(rvect{v}) = (aS_{vect{v}})(r)) for all (r) in (RR). To show (R) is one-to-one, let (R(vect{v}) = vect{0}). This means (S_{vect{v}} = 0) so (0 = S_{vect{v}}(r) = rvect{v}) for all (r). Hence (vect{v} = vect{0}) (take (r = 1)). Finally, to show (R) is onto, let (T) lie in (vectspace{L}(RR, V)). We must find (vect{v}) such that (R(vect{v}) = T), that is (S_{vect{v}} = T). In fact, (vect{v} = T(1)) works since then (T(r) = T(r dotprod 1) = rT(1) = rvect{v} = S_{vect{v}}(r)) holds for all (r), so (T = S_{vect{v}}).

Let (V) be a vector space with ordered basis (B = {vect{b}_{1}, vect{b}_{2}, dots, vect{b}_{n}}). For each (i = 1, 2, dots, n), define (S_{i} : RR to V) by (S_{i}(r) = rvect{b}_{i}) for all (r) in (RR).

  1. Show that each (S_{i}) lies in (vectspace{L}(RR, V)) and (S_{i}(1) = vect{b}_{i}).

  2. Given (T) in (vectspace{L}(RR, V)), let
    (T(1) = a_{1}vect{b}_{1} + a_{2}vect{b}_{2} + cdots + a_{n}vect{b}_{n}), (a_{i}) in (RR). Show that (T = a_{1}S_{1} + a_{2}S_{2} + cdots + a_{n}S_{n}).

  3. Show that ({S_{1}, S_{2}, dots, S_{n}}) is a basis of (vectspace{L}(RR, V)).

  1. Given (T : RR to V), let (T(1) = a_{1}vect{b}_{1} + cdots + a_{n}vect{b}_{n}), (a_{i}) in (RR). For all (r) in (RR), we have ((a_{1}S_{1} + cdots + a_{n}S_{n})(r) = a_{1}S_{1}(r) + cdots + a_{n}S_{n}(r) = a_{1}rvect{b}_{1} + cdots + a_{n}rvect{b}_{n} = rT(1) = T(r)). This shows that (a_{1}S_{1} + cdots + a_{n}S_{n} = T).

[ex:9_1_26] Let (func{dim }V = n), (func{dim }W = m), and let (B) and (D) be ordered bases of (V) and (W), respectively. Show that (M_{DB} : vectspace{L}(V, W) to vectspace{M}_{mn}) is an isomorphism of vector spaces. [Hint: Let (B = {vect{b}_{1}, dots, vect{b}_{n}}) and (D = {vect{d}_{1}, dots, vect{d}_{m}}). Given (A = leftB a_{ij} rightB) in (vectspace{M}_{mn}), show that (A = M_{DB}(T)) where (T : V to W) is defined by
(T(vect{b}_{j}) = a_{1j}vect{d}_{1} + a_{2j}vect{d}_{2} + cdots + a_{mj}vect{d}_{m}) for each (j).]

If (V) is a vector space, the space (V^{*} = vectspace{L}(V, RR)) is called the dual of (V). Given a basis (B = {vect{b}_{1}, vect{b}_{2}, dots, vect{b}_{n}}) of (V), let (E_{i} : V to RR) for each (i = 1, 2, dots, n) be the linear transformation satisfying [E_i(vect{b}_j) = left{ begin{array}{ll} 0 & mbox{ if } i neq j \\ 1 & mbox{ if } i = j end{array} right.] (each (E_{i}) exists by Theorem [thm:020916]). Prove the following:

  1. (E_{i}(r_{1}vect{b}_{1} + cdots + r_{n}vect{b}_{n}) = r_{i}) for each (i = 1, 2, dots, n)

  2. (vect{v} = E_{1}(vect{v})vect{b}_{1} + E_{2}(vect{v})vect{b}_{2} + cdots + E_{n}(vect{v})vect{b}_{n}) for all (vect{v}) in (V)

  3. (T = T(vect{b}_{1})E_{1} + T(vect{b}_{2})E_{2} + cdots + T(vect{b}_{n})E_{n}) for all (T) in (V^{*})

  4. Given (vect{v}) in (V), define (vect{v}^{*} : V to RR) by
    (vect{v}^{*}(vect{w}) = E_{1}(vect{v})E_{1}(vect{w}) + E_{2}(vect{v})E_{2}(vect{w}) + cdots + E_{n}(vect{v})E_{n}(vect{w})) for all (vect{w}) in (V). Show that:
  5. (vect{v}^{*} : V to RR) is linear, so (vect{v}^{*}) lies in (V^{*}).

  6. (vect{b}_i^{*} = E_{i}) for each (i = 1, 2, dots, n).

  7. The map (R : V to V^{*}) with (R(vect{v}) = vect{v}^{*}) is an isomorphism. [Hint: Show that (R) is linear and one-to-one and use Theorem [thm:022192]. Alternatively, show that (R^{-1}(T) = T(vect{b}_{1})vect{b}_{1} + cdots + T(vect{b}_{n})vect{b}_{n}).]

  1. Write (vect{v} = v_{1}vect{b}_{1} + cdots + v_{n}vect{b}_{n}), (v_{j}) in (RR). Apply (E_{i}) to get (E_{i}(vect{v}) = v_{1}E_{i}(vect{b}_{1}) + cdots + v_{n}E_{i}(vect{b}_{n}) = v_{i}) by the definition of the (E_{i}).
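The coordinate functionals (E_i) can also be sketched numerically. This is an illustrative check, not part of the exercise; it assumes NumPy, and the basis of (RR^3) below is made up. (E_i(vect{v})) is computed as the (i)-th coordinate of (vect{v}) with respect to the basis.

```python
import numpy as np

# Columns of B are the basis vectors b_1, b_2, b_3 of R^3 (made up).
B = np.array([[1.0, 0.0, 1.0],
              [1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]]).T

def E(i, v):
    """i-th dual-basis functional: the i-th coordinate of v w.r.t. B."""
    return np.linalg.solve(B, v)[i]

v = np.array([2.0, 3.0, 5.0])
coords = [E(i, v) for i in range(3)]

# Part 2: v = E_1(v) b_1 + E_2(v) b_2 + E_3(v) b_3.
assert np.allclose(sum(c * B[:, i] for i, c in enumerate(coords)), v)

# E_i(b_j) is the Kronecker delta, as in the defining property above.
assert np.allclose([[E(i, B[:, j]) for j in range(3)]
                    for i in range(3)], np.eye(3))
```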


Chapter 9 : Linear transformation

A function from a set $X$ to a set $Y$ is a rule telling how elements of the two sets are associated with each other.

The element $y in Y$ associated, under the function, with an element $x in X$ is the so-called image.
The element $x in X$ associated, under the function, with an element $y in Y$ is the so-called pre-image.

Example I

The function represented in Figure 9.1 associates to each student of class D1 his or her grade on the last statistics examination. We have:

  • $X$ is the set of the six students of class D1 (named in Figure 9.1), and $Y = {1, 2, 3, 4, 5, 6}$.
  • The image of one of the students under the function is $4$.
  • The pre-image of $5$ under the function is a set of two students.
  • One of the students does not have any image (absent at the examination).
  • $1$ and $2$ both do not have any pre-image.


1 Answer

Remember that $T$ maps polynomials to polynomials. A polynomial is a special type of function, and as such, we can substitute particular values into the given polynomial. In this case, the map $T$ takes a polynomial function, substitutes $t = 4$ into the polynomial to get a constant, and then turns that constant into the constant function.

For example, to compute $T(2t^2 - 1)$ we substitute $t = 4$ into the polynomial $2t^2 - 1$ to obtain the constant $2(4)^2 - 1 = 31$ . So, $T(2t^2 - 1) = 0t^2 + 0t + 31.$

To form the matrix, you need to compute $T(1), T(t),$ and $T(t^2)$ , i.e. find the image of the basis under $T$ . Then, you need to compute the coordinate column vectors for the resulting polynomials under the given basis.

Let's get you started. If we evaluate $T(1)$ , we substitute $t = 4$ into the constant polynomial $1$ to get $1$ (the constant polynomial takes the value of $1$ at any $t$ , including $t = 4$ ). So, $T(1) = 1 cdot 1 + 0 cdot t + 0 cdot t^2.$ Therefore, the coordinate column vector with respect to the basis $(1, t, t^2)$ is $begin{pmatrix} 1 \\ 0 \\ 0 end{pmatrix}.$ This forms the first column of your matrix. Do the same with $t$ and $t^2$ , and you'll get the other two columns.
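The whole procedure described in this answer can be sketched in a few lines. This is an illustrative computation (assuming NumPy); polynomials $a_0 + a_1 t + a_2 t^2$ are stored as coordinate vectors $[a_0, a_1, a_2]$.

```python
import numpy as np

def T(coeffs):
    """Evaluate p at t = 4 and return the constant polynomial p(4)."""
    a0, a1, a2 = coeffs
    value = a0 + a1 * 4 + a2 * 4**2
    return np.array([value, 0.0, 0.0])

basis = [np.array([1.0, 0.0, 0.0]),   # 1
         np.array([0.0, 1.0, 0.0]),   # t
         np.array([0.0, 0.0, 1.0])]   # t^2

# The columns of the matrix are the coordinate vectors of T(1), T(t), T(t^2).
M = np.column_stack([T(b) for b in basis])
print(M)    # [[ 1.  4. 16.]
            #  [ 0.  0.  0.]
            #  [ 0.  0.  0.]]

# Check against the worked example: T(2t^2 - 1) = 31 as a constant polynomial.
assert np.allclose(M @ np.array([-1.0, 0.0, 2.0]), [31.0, 0.0, 0.0])
```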


Existence and Uniqueness

Notice that some of these transformations map multiple inputs to the same output, and some are incapable of generating certain outputs.

For example, the projections above can send multiple different points to the same point.

We need some terminology to understand these properties of linear transformations.

Definition. A mapping (T: mathbb{R}^n rightarrow mathbb{R}^m) is said to be onto (mathbb{R}^m) if each (mathbf{b}) in (mathbb{R}^m) is the image of at least one (mathbf{x}) in (mathbb{R}^n).

Informally, (T) is onto if every element of its codomain is in its range.

Another (important) way of thinking about this is that (T) is onto if there is a solution (mathbf{x}) of

(T(mathbf{x}) = mathbf{b})

for all possible (mathbf{b}).

This is asking an existence question about a solution of the equation (T(mathbf{x}) = mathbf{b}) for all (mathbf{b}).

Here, we see that (T) maps points in (mathbb{R}^2) to a plane lying within (mathbb{R}^3).

That is, the range of (T) is a strict subset of the codomain of (T).

So (T) is not onto (mathbb{R}^3).

In this case, for every point in (mathbb{R}^2), there is an (mathbf{x}) that maps to that point.

So, the range of (T) is equal to the codomain of (T).

So (T) is onto (mathbb{R}^2).

Here, the red points are the images of the blue points.

What about this transformation? Is it onto (mathbb{R}^2)?

Here again the red points (which all lie on the (x) -axis) are the images of the blue points.

What about this transformation? Is it onto (mathbb{R}^2)?

Definition. A mapping (T: mathbb{R}^n rightarrow mathbb{R}^m) is said to be one-to-one if each (mathbf{b}) in (mathbb{R}^m) is the image of at most one (mathbf{x}) in (mathbb{R}^n).

If (T) is one-to-one, then for each (mathbf{b}), the equation (T(mathbf{x}) = mathbf{b}) has either a unique solution, or none at all.

This is asking a uniqueness question about the solutions of the equation (T(mathbf{x}) = mathbf{b}) for all (mathbf{b}).

Let’s examine the relationship between these ideas and some previous definitions.

If (Amathbf{x} = mathbf{b}) is consistent for all (mathbf{b}), is (T(mathbf{x}) = Amathbf{x}) onto? one-to-one?

(T(mathbf{x})) is onto. (T(mathbf{x})) may or may not be one-to-one. If the system has multiple solutions for some (mathbf{b}), (T(mathbf{x})) is not one-to-one.

If (Amathbf{x} = mathbf{b}) is consistent and has a unique solution for all (mathbf{b}), is (T(mathbf{x}) = Amathbf{x}) onto? one-to-one?

If (Amathbf{x} = mathbf{b}) is not consistent for all (mathbf{b}), is (T(mathbf{x}) = Amathbf{x}) onto? one-to-one?

(T(mathbf{x})) is not onto. (T(mathbf{x})) may or may not be one-to-one.

If (T(mathbf{x}) = Amathbf{x}) is onto, is (Amathbf{x} = mathbf{b}) consistent for all (mathbf{b})? Is the solution unique for all (mathbf{b})?

If (T(mathbf{x}) = Amathbf{x}) is one-to-one, is (Amathbf{x} = mathbf{b}) consistent for all (mathbf{b})? Is the solution unique for all (mathbf{b})?
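These questions can be answered mechanically from the rank of (A): (T) is onto (mathbb{R}^m) exactly when (A) has a pivot in every row, and one-to-one exactly when (A) has a pivot in every column. A short sketch (assuming NumPy; the matrices are made up):

```python
import numpy as np

# For T(x) = A x with A an m-by-n matrix:
#   T is onto R^m   iff rank(A) == m   (pivot in every row),
#   T is one-to-one iff rank(A) == n   (pivot in every column).
def is_onto(A):
    return np.linalg.matrix_rank(A) == A.shape[0]

def is_one_to_one(A):
    return np.linalg.matrix_rank(A) == A.shape[1]

# A 2x3 matrix of rank 2: onto R^2, but not one-to-one.
A = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 3.0]])
assert is_onto(A) and not is_one_to_one(A)

# A 3x2 matrix of rank 2: one-to-one, but not onto R^3.
B = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
assert is_one_to_one(B) and not is_onto(B)
```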


Chapter 9: Linear Mappings

This chapter is about linear mappings . A mapping is simply a function that takes a vector in and outputs another vector. A linear mapping is a special kind of function that is very useful since it is simple and yet powerful.

Example 9.1: Image Compression
Linear mappings are common in real-world engineering problems. One example is image or video compression. Here an image to be coded is broken down into blocks, such as the $4 times 4$ pixel blocks shown in Figure 9.1.

A real encoder is more complicated than this picture, and contains many optimizations. For instance, the linear mapping is not implemented using a matrix multiplication, but in a faster way that is mathematically equivalent to it.

Definition 9.1: Mapping
A mapping $F$ is a rule that, for every item in one set $N$, provides one item in another set $M$:
$$F: N rightarrow M. qquad (9.1)$$
This may look abstract, but in fact you have already been dealing with mappings, only under the name function. Another way to state the same thing is
$$y = F(x). qquad (9.2)$$
The form
$$F: x rightarrow y, quad x in N qquad (9.3)$$
is also used. For example the function $y = x^2$, shown in Figure 9.3, is a rule that, for every item in the set of real numbers $mathbb{R}$, provides another item from the set of real numbers $mathbb{R}$. Thus, in this example, both $N$ and $M$ equal $mathbb{R}$.

Definition 9.2: Domain, Codomain, and Range of a Mapping
Assume we have a mapping $y = F(x)$ where $x in N$ and $y in M$. Then $N$ is the domain of the mapping, and $M$ is the codomain of the mapping. The range (or alternatively, the image) of the mapping is the set $V_F$, where
$$V_F = { F(x) mid x in N }. qquad (9.4)$$

The vertical bar should be read as "such that" or "with the property of". In this example, the expression can be read out as "$V_F$ is the set of all elements $F(x)$ such that $x$ belongs to the set $N$". For the example $y = x^2$, the range equals the set of positive real numbers including zero, $V_F = mathbb{R}_+$. Therefore, in this case, we only reach a subset of the codomain, i.e., $V_F$ is a subset of $M$.

In linear algebra, the inputs and outputs of a function are vectors instead of scalars. Assume we have a coordinate system $vc{e}_1, vc{e}_2$ and that $begin{pmatrix} x_1 \\ x_2 end{pmatrix}$ is the coordinate vector of the vector $vc{x}$. We can now have a function $vc{y} = F(vc{x})$ that maps every $vc{x}$ to a new vector $vc{y} = begin{pmatrix} y_1 \\ y_2 end{pmatrix}$ according to, for instance,

$$y_1 = x_1, qquad y_2 = 0. qquad (9.5)$$
It is not possible to draw a simple graph for this mapping, since four dimensions would be needed for that (two for the input and two for the output). However, it is often possible to get an intuitive understanding of the mapping by drawing both the input and the output in the same diagram. Interactive Illustration 9.5 shows this for the mapping mentioned above. Note that you can move the red input arrow $vc{x}$ and see how the blue output arrow $vc{y}$ moves.

As can be seen, the effect of the mapping is to project any input vector onto the $vc{e}_1$-axis. Any vector in the plane can be used as input, hence the domain is $mathbb{R}^2$. The codomain is also $mathbb{R}^2$, since the output is a vector of two dimensions, but the range or image is the $vc{e}_1$-axis. The range is marked with green in the second step of the figure.

A slightly more interesting example of a mapping is the following:

$$y_1 = cos(frac{pi}{3}) x_1 - sin(frac{pi}{3}) x_2, qquad y_2 = sin(frac{pi}{3}) x_1 + cos(frac{pi}{3}) x_2. qquad (9.6)$$
As can be seen, the factors in front of the $x_1$s and $x_2$s resemble a rotation matrix (see Definition 6.10) by $pi/3$ radians. This mapping is illustrated in Interactive Illustration 9.6, where again the input vector is marked with red and the output vector is marked with blue.

As can be seen by playing around with Interactive Illustration 9.6, the output vector is a rotated copy of the input vector with the rotation angle $frac{pi}{3}$. As a matter of fact, we can write Equation (9.6) in matrix form as

$$begin{pmatrix} y_1 \\ y_2 end{pmatrix} = begin{pmatrix} cos frac{pi}{3} & -sin frac{pi}{3} \\ sin frac{pi}{3} & cos frac{pi}{3} end{pmatrix} begin{pmatrix} x_1 \\ x_2 end{pmatrix} qquad (9.7)$$
or, shorter,
$$vc{y} = mx{A} vc{x}. qquad (9.8)$$
It is now easy to see that the matrix $mx{A}$ is just a two-dimensional rotation matrix as defined in Definition 6.10 in Chapter 6. When a mapping can be written in matrix form, i.e., in the form $vc{y} = mx{A} vc{x}$, we call $mx{A}$ the transformation matrix.
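The rotation mapping of Equation (9.6) can be checked numerically. A minimal sketch, assuming NumPy; it verifies the two defining properties of a rotation (lengths are preserved and angles change by $theta$):

```python
import numpy as np

# The rotation matrix of Equation (9.7) with theta = pi/3.
theta = np.pi / 3
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

x = np.array([1.0, 0.0])
y = A @ x

# A rotation preserves the length of the vector...
assert np.isclose(np.linalg.norm(y), np.linalg.norm(x))
# ...and rotates it by exactly theta radians.
assert np.isclose(np.arctan2(y[1], y[0]), theta)
```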

The example in Interactive Illustration 9.3 can also be written in matrix form,

$$begin{pmatrix} y_1 \\ y_2 end{pmatrix} = begin{pmatrix} 1 & 0 \\ 0 & 0 end{pmatrix} begin{pmatrix} x_1 \\ x_2 end{pmatrix}, qquad (9.9)$$
where the transformation matrix in this case equals $begin{pmatrix} 1 & 0 \\ 0 & 0 end{pmatrix}$. That raises the question whether all vector mappings can be written on the form $vc{y} = mx{A} vc{x}$ for some $mx{A}$ with constant coefficients. The answer is no. As an example, the mapping
$$y_1 = x_1 x_2 + x_2, qquad y_2 = x_1 + e^{x_2} qquad (9.10)$$
cannot be written as $vc{y} = mx{A} vc{x}$. It is of course possible to write $begin{pmatrix} y_1 \\ y_2 end{pmatrix} = begin{pmatrix} x_2 & 1 \\ 1 & frac{e^{x_2}}{x_2} end{pmatrix} begin{pmatrix} x_1 \\ x_2 end{pmatrix}$, but that violates the rule that $mx{A}$ should consist of constant coefficients, i.e., be independent of $vc{x}$. To investigate which mappings can be written in this form, we first introduce the concept of a linear mapping.

Definition 9.3: Linear Mapping
A linear mapping is a mapping $F$ which satisfies

$$F(vc{x}' + vc{x}'') = F(vc{x}') + F(vc{x}''), qquad F(lambda vc{x}) = lambda F(vc{x}). qquad (9.11)$$

Example 9.2: Shopping Cart to Cost
Assume that a shop only sells packages of penne, jars of Arrabiata sauce, and bars of chocolate. The contents of your shopping cart can be modelled as a vector space. Introduce addition of two shopping carts as putting all of the items of both carts in one cart. Introduce multiplication of a scalar as multiplying the number of items in a shopping cart with that scalar. Notice that here, there are practical problems with multiplying a shopping cart with non-integer numbers or negative numbers, which makes the model less useful in practice. Introduce a set of basis shopping carts. Let $vc_1$ correspond to the shopping cart containing one package of penne, let $vc_2$ correspond to the shopping cart containing one jar of Arrabiata sauce, and let $vc_3$ correspond to the shopping cart containing one bar of chocolate. Then each shopping cart $vc$ can be described by three coordinates $(x_1, x_2, x_3)$ such that $vc = x_1 vc_1 + x_2 vc_2 + x_3 vc_3$.

In real life this map is often non-linear , e.g., a shop might have campaigns saying 'buy 3 for the price of 2'. But modelling the mapping as a linear map is often a reasonable and useful model. Again (as is common with mathematical modelling) there is a discrepancy between mathematical model and reality. The results of mathematical analysis must always be used with reason and critical thinking. Even if the cost of a shopping cart of 1 package of penne is 10, it does not always mean that you can sell packages of penne to the store for 10 each.
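The cart-to-cost map of this example can be written as a dot product with a price vector, which makes the two linearity conditions of Definition 9.3 easy to check. A sketch assuming NumPy; the unit prices below are made up for illustration:

```python
import numpy as np

# Cost of a cart (penne, sauce, chocolate) as a linear map R^3 -> R.
# These unit prices are invented for the example.
prices = np.array([10.0, 25.0, 15.0])

def cost(cart):
    return prices @ cart

a = np.array([2.0, 1.0, 0.0])   # 2 penne, 1 sauce
b = np.array([1.0, 0.0, 3.0])   # 1 penne, 3 chocolate

# The two conditions of Definition 9.3:
assert np.isclose(cost(a + b), cost(a) + cost(b))   # additivity
assert np.isclose(cost(3 * a), 3 * cost(a))         # homogeneity
```

A "buy 3 for the price of 2" campaign would break exactly the second assertion, which is why such a cost map is no longer linear.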

Assume we have a basis $vc{e}_1$, $vc{e}_2$ in $N$ and $M$. We can then write the input $vc{x}$ and the output $vc{y}$ in this basis,

$$vc{x} = x_1 vc{e}_1 + x_2 vc{e}_2, qquad vc{y} = y_1 vc{e}_1 + y_2 vc{e}_2. qquad (9.12)$$
Inserting the expression for $vc{x}$ in $vc{y} = F(vc{x})$, we get
$$vc{y} = F(vc{x}) = F(x_1 vc{e}_1 + x_2 vc{e}_2), qquad (9.13)$$
and since $F$ is linear, we can apply the first and second conditions of linearity,
$$vc{y} = F(x_1 vc{e}_1) + F(x_2 vc{e}_2) = x_1 F(vc{e}_1) + x_2 F(vc{e}_2). qquad (9.14)$$
Since $F$ maps one vector to another vector, $F(vc{e}_1)$ must also be a vector that can be expressed in the basis. Assume it has the coordinates $begin{pmatrix} a_{11} \\ a_{21} end{pmatrix}$ in the basis $vc{e}_1$, $vc{e}_2$,
$$F(vc{e}_1) = a_{11} vc{e}_1 + a_{21} vc{e}_2. qquad (9.15)$$
Likewise, we assume
$$F(vc{e}_2) = a_{12} vc{e}_1 + a_{22} vc{e}_2. qquad (9.16)$$
We can now continue the expansion of $F(vc{x})$ as
$$vc{y} = x_1(a_{11} vc{e}_1 + a_{21} vc{e}_2) + x_2(a_{12} vc{e}_1 + a_{22} vc{e}_2) = (x_1 a_{11} + x_2 a_{12}) vc{e}_1 + (x_1 a_{21} + x_2 a_{22}) vc{e}_2. qquad (9.17)$$
Comparing this expression to the second row of Equation (9.12), we understand that $y_1$ must equal $a_{11} x_1 + a_{12} x_2$ and $y_2 = a_{21} x_1 + a_{22} x_2$. We have
$$begin{pmatrix} y_1 \\ y_2 end{pmatrix} = begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} end{pmatrix} begin{pmatrix} x_1 \\ x_2 end{pmatrix}, qquad (9.18)$$
that is,
$$vc{y} = mx{A} vc{x}. qquad (9.19)$$
Now we need to prove the converse: that if $vc{y} = mx{A} vc{x}$, then the mapping is linear. Assume we have one input $vc{x}'$ with coordinates $vc{x}' = x_1' vc{e}_1 + x_2' vc{e}_2$, or in vector form $vc{x}' = begin{pmatrix} x_1' \\ x_2' end{pmatrix}$, and another input $vc{x}''$, in vector form $vc{x}'' = begin{pmatrix} x_1'' \\ x_2'' end{pmatrix}$. The first condition follows directly from rule $(vii)$ of the matrix arithmetic properties in Theorem 6.1, that is,
$$F(vc{x}' + vc{x}'') = mx{A}(vc{x}' + vc{x}'') = mx{A} vc{x}' + mx{A} vc{x}'' = F(vc{x}') + F(vc{x}''). qquad (9.20)$$
The second condition also follows from matrix algebra,
$$F(lambda vc{x}') = mx{A}(lambda vc{x}') = lambda mx{A} vc{x}' = lambda F(vc{x}'), qquad (9.21)$$
since a scalar $lambda$ can be placed on either side of a matrix ($mx{A} lambda = lambda mx{A}$). The proof is thus complete.

We will prove the case of three dimensions, but the proof for other numbers of dimensions is similar.

The first basis vector $vc{e}_1$ can be written as $vc{e}_1 = 1 vc{e}_1 + 0 vc{e}_2 + 0 vc{e}_3$ and thus has the coordinates $(1, 0, 0)$. Using $vc{x} = begin{pmatrix} 1 \\ 0 \\ 0 end{pmatrix}$ in the formula $vc{y} = mx{A} vc{x}$ gives

$$begin{pmatrix} y_1 \\ y_2 \\ y_3 end{pmatrix} = begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} end{pmatrix} begin{pmatrix} 1 \\ 0 \\ 0 end{pmatrix} = begin{pmatrix} 1 a_{11} + 0 a_{12} + 0 a_{13} \\ 1 a_{21} + 0 a_{22} + 0 a_{23} \\ 1 a_{31} + 0 a_{32} + 0 a_{33} end{pmatrix} = begin{pmatrix} a_{11} \\ a_{21} \\ a_{31} end{pmatrix}, qquad (9.22)$$
which is the first column in $mx{A}$. Thus the image $F(vc{e}_1)$ of the basis vector $vc{e}_1$ is the first column of $mx{A}$, denoted $vc{a}_{,1}$. Likewise, the second basis vector can be written $vc{e}_2 = 0 vc{e}_1 + 1 vc{e}_2 + 0 vc{e}_3$, and thus has the coordinates $(0, 1, 0)$. Its image is therefore
$$begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} end{pmatrix} begin{pmatrix} 0 \\ 1 \\ 0 end{pmatrix} = begin{pmatrix} 0 a_{11} + 1 a_{12} + 0 a_{13} \\ 0 a_{21} + 1 a_{22} + 0 a_{23} \\ 0 a_{31} + 1 a_{32} + 0 a_{33} end{pmatrix} = begin{pmatrix} a_{12} \\ a_{22} \\ a_{32} end{pmatrix}, qquad (9.23)$$
which is the second column vector $vc{a}_{,2}$ of the matrix $mx{A}$. Similarly, for the third basis vector we get
$$begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} end{pmatrix} begin{pmatrix} 0 \\ 0 \\ 1 end{pmatrix} = begin{pmatrix} 0 a_{11} + 0 a_{12} + 1 a_{13} \\ 0 a_{21} + 0 a_{22} + 1 a_{23} \\ 0 a_{31} + 0 a_{32} + 1 a_{33} end{pmatrix} = begin{pmatrix} a_{13} \\ a_{23} \\ a_{33} end{pmatrix}, qquad (9.24)$$
which is the third column of $mx{A}$. This can be extended to any number of dimensions.
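The fact just derived, that the image of the $j$-th basis vector is the $j$-th column of $mx{A}$, can be checked numerically. A minimal sketch assuming NumPy, with a randomly chosen matrix:

```python
import numpy as np

# For any matrix mapping F(x) = A x, the image of the j-th standard
# basis vector e_j is exactly the j-th column of A.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))

for j, e in enumerate(np.eye(3)):
    assert np.allclose(A @ e, A[:, j])
```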

Example 9.3: Finding a Linear Mapping's Matrix
A linear mapping $vc{y} = F(vc{x})$ rotates a two-dimensional vector $vc{x}$ counterclockwise 90 degrees. Find the transformation matrix $mx{A}$ of the matrix form $vc{y} = mx{A} vc{x}$ when the standard orthonormal basis $vc{e}_1 = (1, 0)$, $vc{e}_2 = (0, 1)$ is used.


Linear Transformation

A linear transformation (linear map, linear mapping or linear function) is a mapping V → W between two vector spaces that preserves addition and scalar multiplication.
— Wikipedia (Linear map)

Formally, for vector spaces V, W over the same field K, the function f: V → W is a linear map if any two vectors u, v ∈ V and any scalar c ∈ K satisfy the two following conditions:

(1): f(u + v) = f(u) + f(v)
(2): f(cu) = cf(u)


Combining transformations

The process of combining transformations is known as composition. Two or more linear transformations can be combined with relative ease using matrix multiplication. For example, let's assume we have two matrices, A and B, that represent two different linear transformations. Assuming that we have a position vector matrix X1, we can apply these transformations one after the other (first A, then B), as follows:

The same end result can be achieved by applying the transformation that is created by multiplying matrices A and B together. Note, however, that the order in which the matrices must be multiplied is the opposite of the order in which they should be applied. Thus, in order to achieve the same end result as we did previously we would have:

Consider the following triangle:

Triangle ABC has xy coordinates: (3, 5), (4, 1), (2, 1)

Supposing we want to rotate the triangle clockwise through ninety degrees, and then reflect it in the y-axis. The two transformation matrices would be:

    ⎡  cos(90°)  sin(90°) ⎤   ⎡  0  1 ⎤
    ⎣ -sin(90°)  cos(90°) ⎦ = ⎣ -1  0 ⎦    Rotation by 90° in a clockwise direction

    ⎡ -1  0 ⎤                              Reflection in the y-axis
    ⎣  0  1 ⎦

Applying these transformations separately, we get:

    ⎡  0  1 ⎤ ⎡ 3  4  2 ⎤   ⎡  5   1   1 ⎤
    ⎣ -1  0 ⎦ ⎣ 5  1  1 ⎦ = ⎣ -3  -4  -2 ⎦

    ⎡ -1  0 ⎤ ⎡  5   1   1 ⎤   ⎡ -5  -1  -1 ⎤
    ⎣  0  1 ⎦ ⎣ -3  -4  -2 ⎦ = ⎣ -3  -4  -2 ⎦

Here is the first transformation:

Triangle ABC is rotated ninety degrees to become triangle A'B'C'

Here is the second transformation:

Triangle A'B'C' is reflected in the y-axis to become triangle A''B''C''

We could create a transformation matrix that combines these operations by multiplying the two individual transformation matrices together as follows:

    ⎡ -1  0 ⎤ ⎡  0  1 ⎤   ⎡  0  -1 ⎤
    ⎣  0  1 ⎦ ⎣ -1  0 ⎦ = ⎣ -1   0 ⎦

Note that we multiply the matrices in the opposite order to that in which we want them to be applied. If we now multiply the resulting transformation matrix by the position vector matrix of our original triangle we get:

    ⎡  0  -1 ⎤ ⎡ 3  4  2 ⎤   ⎡ -5  -1  -1 ⎤
    ⎣ -1   0 ⎦ ⎣ 5  1  1 ⎦ = ⎣ -3  -4  -2 ⎦

If you refer back to the results we got when we carried out the rotation and reflection transformations separately, you will see that the final x and y coordinates for each point are identical.
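The whole worked example can be replayed numerically. A sketch assuming NumPy; it confirms both that the step-by-step result matches the combined matrix, and that the combined matrix must be formed as reflection times rotation, not the other way round:

```python
import numpy as np

R = np.array([[ 0.0, 1.0],
              [-1.0, 0.0]])     # rotation by 90 degrees clockwise
F = np.array([[-1.0, 0.0],
              [ 0.0, 1.0]])     # reflection in the y-axis

# Triangle ABC as a matrix of position vectors (one column per vertex).
X = np.array([[3.0, 4.0, 2.0],
              [5.0, 1.0, 1.0]])

step_by_step = F @ (R @ X)      # apply R first, then F
combined     = (F @ R) @ X      # note the order: F @ R, not R @ F

assert np.allclose(step_by_step, combined)
assert np.allclose(combined, [[-5.0, -1.0, -1.0],
                              [-3.0, -4.0, -2.0]])
```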


9.1: The Matrix of a Linear Transformation

3.2.3 Affine Transformation of the Euclidean Plane Printout
A mathematician, like a painter or a poet, is a maker of patterns. If his patterns are more permanent than theirs, it is because they are made with ideas.
Godfrey Harold Hardy (1877–1947)

What is the form of a transformation matrix for the analytic model of the Euclidean plane? We investigate this question. Let A = [a_ij] be a transformation matrix for the Euclidean plane and (x, y, 1) be any point in the Euclidean plane. Then


Since the last matrix must be the matrix of a point in the Euclidean plane, we must have a_31 x + a_32 y + a_33 = 1 for every point (x, y, 1) in the Euclidean plane. In particular, the point (0, 0, 1) must satisfy the equation. Hence, a_33 = 1. Further, the points (0, 1, 1) and (1, 0, 1) satisfy the equation and imply a_32 = 0 and a_31 = 0, respectively. Therefore, the transformation matrix must have the form

which motivates the following definition.

Definition. An affine transformation of the Euclidean plane, T, is a mapping that maps each point X of the Euclidean plane to a point T(X) of the Euclidean plane defined by T(X) = AX, where det(A) is nonzero and

where each a ij is a real number.

Exercise 3.19. Prove that every affine transformation of the Euclidean plane has an inverse that is an affine transformation of the Euclidean plane . (Hint. Write the inverse by using the adjoint. Refer to a linear algebra text.)

Proposition 3.3. An affine transformation of the Euclidean plane is a transformation of the Euclidean plane.

Exercise 3.20. Prove Proposition 3.3.

Proposition 3.4. The set of affine transformations of the Euclidean plane form a group under matrix multiplication.

Proof. Since the identity matrix is clearly a matrix of an affine transformation of the Euclidean plane and the product of matrices is associative, we need only show closure and that every transformation has an inverse.
Let A and B be the matrices of affine transformations of the Euclidean plane. Since det(A) and det(B) are both nonzero, we have that det(AB) = det(A) · det(B) is not zero. Also,

is a matrix of an affine transformation of the Euclidean plane. (The last row of the matrix is 0 , 0, 1.) Hence closure holds.
Complete the proof by showing the inverse property.//
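The closure and inverse properties of Proposition 3.4 can be checked numerically. A sketch assuming NumPy; the two affine matrices below (nonzero determinant, last row 0, 0, 1) are made up:

```python
import numpy as np

# Two matrices of affine transformations of the Euclidean plane.
A = np.array([[2.0, 1.0,  3.0],
              [0.0, 1.0, -2.0],
              [0.0, 0.0,  1.0]])
B = np.array([[0.0, -1.0, 1.0],
              [1.0,  0.0, 4.0],
              [0.0,  0.0, 1.0]])

# Closure: the product is again affine (last row 0, 0, 1, det nonzero).
P = A @ B
assert np.allclose(P[2], [0.0, 0.0, 1.0])
assert not np.isclose(np.linalg.det(P), 0.0)

# Inverse property: the inverse of an affine matrix is affine as well.
Ainv = np.linalg.inv(A)
assert np.allclose(Ainv[2], [0.0, 0.0, 1.0])
assert not np.isclose(np.linalg.det(Ainv), 0.0)
```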

Exercise 3.21. Given three points P(0, 0, 1), Q(1, 0, 1), and R(2, 1, 1), and an affine transformation T. (a) Find the points P' = T( P), Q' = T(Q), and R' = T(R) where the matrix of the transformation is . (b) Sketch triangle PQR and triangle P'Q'R' . (c) Describe how the transformation moved and changed the triangle PQR.

Exercise 3.22. Find the matrix of an affine transformation that maps P(0, 0, 1) to P'(0, 2, 1), Q(1, 0, 1) to Q'(2, 1, 1), and R(2, 3, 1) to R'(7, 9, 1).

Exercise 3.23. Show the group of affine transformations of the Euclidean plane is not commutative.



The main objective of principal components analysis (PCA) is to reduce the dimension of the observations. The simplest way of dimension reduction is to take just one element of the observed vector and to discard all others. This is not a very reasonable approach, as we have seen in the earlier chapters, since strength may be lost in interpreting the data. In the bank notes example we have seen that just one variable (e.g. length) had no discriminatory power in distinguishing counterfeit from genuine bank notes. An alternative method is to weight all variables equally, that is, to consider the simple average of all of the elements in the vector. This again is undesirable, since all elements are then regarded as equally important (equal weight).

A more flexible approach is to study a weighted average, namely

The weight vector can then be optimized to investigate and detect specific features. We call (9.1) a standardized linear combination (SLC). Which SLC should we choose? One aim is to maximize the variance of the projection, i.e., to choose according to

The interesting "directions" are found through the spectral decomposition of the covariance matrix. Indeed, from Theorem 2.5, the direction is given by the eigenvector corresponding to the largest eigenvalue of the covariance matrix.

Figures 9.1 and 9.2 show two such projections (SLCs) of the same zero-mean data set. In Figure 9.1 an arbitrary projection is displayed. The top window shows the data point cloud and the line onto which the data are projected. The middle window shows the values projected in the chosen direction. The bottom window shows the variance of the actual projection and the percentage of the total variance that it explains.

Figure 9.2 shows a projection that captures the majority of the variance in the data. This direction is of interest and lies along the main direction of the point cloud. The same line of thought can be applied to all of the data orthogonal to this direction, which leads to the second eigenvector. The SLC with the highest variance, obtained from maximizing (9.2), is the first principal component (PC). Orthogonal to this direction we find the SLC with the second highest variance: the second PC.

Proceeding in this way and writing in matrix notation, the result for a random variable with the given mean and covariance matrix is the PC transformation, defined as

Here we have centered the variable in order to obtain a zero-mean PC variable.

The PC transformation is thus

So the first principal component is

Let us compute the variance of this PC using formulas (4.22)-(4.26):

This can be expressed more generally, and is given in the next theorem.

The relation between the PC transformation and the search for the best SLC is made in the following theorem, which follows directly from (9.2) and Theorem 2.5.
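The maximization just described can be sketched numerically: the first PC direction is the eigenvector of the covariance matrix with the largest eigenvalue, and no other unit direction gives a larger projected variance. An illustrative sketch assuming NumPy, with simulated zero-mean data:

```python
import numpy as np

# Simulated two-dimensional data with a dominant direction, then centered.
rng = np.random.default_rng(1)
X = rng.standard_normal((500, 2)) @ np.array([[3.0, 0.0],
                                              [1.0, 0.5]])
X = X - X.mean(axis=0)

S = np.cov(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(S)   # eigenvalues in ascending order
delta1 = eigvecs[:, -1]                # direction of largest variance

y1 = X @ delta1                        # first principal component

# The variance of the first PC equals the largest eigenvalue of S.
assert np.isclose(y1.var(ddof=1), eigvals[-1])

# No other unit direction yields a larger projected variance.
for theta in np.linspace(0, np.pi, 50):
    d = np.array([np.cos(theta), np.sin(theta)])
    assert (X @ d).var(ddof=1) <= eigvals[-1] + 1e-9
```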


Matrix Transformation Decompositions: Eigenstructure and Quadratic Forms

5.1 INTRODUCTION

In the preceding chapter we discussed various cases of special matrix transformations, such as rotations, reflections, and stretches, and illustrated their effects geometrically. We also showed the geometric effect of various composites of transformations, such as a rotation followed by a stretch.

The motivation for this chapter, however, is the reverse of that of Chapter 4. Here we start with an arbitrary matrix transformation and consider ways to decompose it into a product of matrices that are simpler from a geometric viewpoint. Our objective, then, is to provide, in part, a set of approaches complementary to those described in Chapter 4.

Adopting this reverse point of view allows us to introduce a number of concepts that are important in multivariate analysis: eigenvalues and eigenvectors, the eigenstructure properties of symmetric and nonsymmetric matrices, the singular value decomposition of a matrix, and quadratic forms. This new material, together with the three preceding chapters, should provide most of the background needed to understand vector and matrix operations in multivariate analysis. Moreover, we shall examine concepts discussed earlier, such as matrix rank, matrix inverses, and matrix singularity, from another perspective, one taken from the context of eigenstructures.

Finding the eigenstructure of a square matrix, like finding an inverse, is almost a routine matter in the present computer age. Even so, it seems useful to discuss the kinds of computations involved, even if we restrict ourselves to small matrices of order 2 × 2 or 3 × 3. In this way we can illustrate many of these concepts geometrically as well as numerically.

Since the topic of eigenstructures can become rather complicated, we begin this chapter with an overview discussion of eigenstructures in which the eigenvalues and eigenvectors can be found easily and quickly. The emphasis here is on describing the geometric aspects of eigenstructures as they relate to the special kinds of basis-vector changes that make the nature of the mapping as simple as possible, for example, a stretch relative to an appropriate set of basis vectors.

This simple, descriptive treatment also allows us to tie in the present material on eigenstructures with the discussion in Chapter 4 that centered on point and basis-vector transformations. In so doing, we return to the numerical example presented in Section 4.3 and find the eigenstructure of the transformation matrix described there.

The next main section of the chapter continues the discussion of eigenstructures, but now in the context of multivariate analysis. To introduce this complementary approach (one based on finding a linear composite such that the variance of the points projected onto it is maximal), we return to a small numerical problem drawn from the sample data of Table 1.2. We assume that we have a set of mean-corrected scores of twelve employees on X1 (attitude toward the company) and X2 (number of years employed by the company). The problem is to find the linear combination of the two separate scores that exhibits maximum variance across individuals. This motivation leads to a discussion of the eigenstructure of symmetric matrices and the multivariate technique of principal components analysis.

The next main section of the chapter discusses various properties of matrix eigenstructures. The more common case of symmetric matrices (with real-valued entries) is discussed in some detail, while the more complex cases involving the eigenstructure of nonsymmetric matrices are described more briefly. The relationship of eigenstructure to matrix rank is also described here.

The singular value decomposition of either a square or a rectangular matrix, and its relationship to matrix decomposition, is another central concept in multivariate procedures. Attention then turns to this topic, and the discussion is also related to the material covered in Chapter 4. Here, however, we focus on decomposing a matrix into a product of other matrices that individually exhibit relatively simple geometric interpretations.

Quadratic forms are taken up next and related to the earlier material. Moreover, an additional discussion of the eigenstructure of square nonsymmetric matrices, relevant to multivariate techniques such as multiple discriminant analysis and canonical correlation, is presented in the context of the third sample problem of Chapter 1.

Thus, if matrix inversion and matrix rank are important in linear regression and related procedures for studying single-criterion, multiple-predictor association, then the eigenstructure of matrices and quadratic forms are important concepts for dealing with multiple-criterion, multiple-predictor relationships.
