S

Fred E. Szabo PhD, in The Linear Algebra Survival Guide, 2015

Scalar Multiple of a Matrix

Matrices can be multiplied by scalars componentwise. The result of multiplying a matrix by a scalar is called a scalar multiple of the matrix. In Mathematica, placing a scalar to the left of a matrix with a space in between defines scalar multiplication.

Illustration

Scalar multiple of a matrix

MatrixForm[A = {{1, 2, 3}, {4, 5, 6}}]

1  2  3
4  5  6

MatrixForm[s A]

s    2 s    3 s
4 s  5 s    6 s

Every element in the matrix A is multiplied by the scalar s.

Illustration

Multiplication by the scalar 1

vector = Range[5]; scalar = 1;

1 vector

{1, 2, 3, 4, 5}

1 vector == vector

True

Manipulation

Scalar multiple of a 2-by-3 matrix

Clear[a]

Manipulate[MatrixForm[a {{1, 2, 3}, {4, 5, 6}}], {a, -10, 10, 1}]

We use Manipulate and MatrixForm to explore the scalar multiples of a 2-by-3 matrix and display the result in two-dimensional form. For instance, if a = 9, then the scalar product

a {{1, 2, 3}, {4, 5, 6}}

is the matrix

9   18   27
36  45   54

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780124095205500266

Systems of linear differential equations

Henry J. Ricardo, in A Modern Introduction to Differential Equations (Third Edition), 2021

6.7.2 The impossibility of dependent eigenvectors

If one of the eigenvectors is a scalar multiple of the other (say V2 is a multiple of V1), then the expression in (6.7.1) collapses to a scalar multiple of V1 and there is only one arbitrary constant. This expression can't represent the general solution of a second-order equation.

Fortunately, this collapse can't happen under our current assumption. It is easy to show that if a 2 × 2 matrix A has distinct eigenvalues λ1 and λ2 with corresponding eigenvectors V1 and V2, then neither eigenvector is a scalar multiple of the other. Suppose that V2 = cV1, where c is a nonzero scalar. Then V2 − cV1 = 0, the zero vector, and we must have

0 = A(V2 − cV1) = AV2 − c(AV1) = λ2V2 − c(λ1V1) = λ2(cV1) − c(λ1V1) = c(λ2 − λ1)V1.

But then, because c ≠ 0 and V1 (as an eigenvector) is nonzero, we must conclude that λ2 − λ1 = 0, which contradicts the assumption that we have distinct eigenvalues.
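This argument is easy to check numerically. Below is a minimal Mathematica sketch (the system used in the first excerpt on this page); the matrix {{2, 1}, {0, 3}} is our own arbitrary example, not one from the text.

(* A 2 x 2 matrix with distinct eigenvalues: its eigenvectors are not scalar multiples of each other. *)
mat = {{2, 1}, {0, 3}};
{vals, vecs} = Eigensystem[mat];
vals   (* {3, 2}: distinct eigenvalues *)
vecs   (* {{1, 1}, {1, 0}} *)
(* The rows of vecs are independent exactly when this determinant is nonzero. *)
Det[vecs] != 0   (* True: neither eigenvector is a multiple of the other *)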

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780128182178000130

Automorphism Groups

In C*-Algebras and their Automorphism Groups (Second Edition), 2018

7.11.11 Theorem

Let (M, G, α) be a W*-dynamical system, where G is discrete.

If M ⋊α G is a factor, then G is ergodic on the center of M. The converse holds if G acts centrally freely on M.

Proof

If M ⋊α G is a factor, then each central fixed point in M is a scalar multiple of 1 by 7.11.4.

Assume now that G acts centrally freely on M, and take y in the center of M ⋊α G. Then y ∈ ι(M), so that y = ι(x) by 7.11.10, where x ∈ Z. Moreover, by 7.11.4, x (= π(y)) is a fixed point for G. Thus, if G is ergodic on Z, then y is a scalar multiple of 1. □

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780128141229000076

Systems of Linear Equations

Stephen Andrilli, David Hecker, in Elementary Linear Algebra (Fifth Edition), 2016

Row Operations and Their Notation

There are three operations that we are allowed to use on the augmented matrix in Gaussian Elimination. These are as follows:

Row Operations

(I)

Multiplying a row by a nonzero scalar

(II)

Adding a scalar multiple of one row to another row

(III)

Switching the positions of two rows in the matrix

To save space, we will use a shorthand notation for these row operations. For instance, a row operation of Type (I) in which each entry of row 3 is replaced by 1/2 times that entry is represented by

(I): ⟨3⟩ ← (1/2) ⟨3⟩.

That is, each entry of row 3 is multiplied by 1/2, and the result replaces the previous row 3. A Type (II) row operation in which (−3) × (row 4) is added to row 2 is represented by

(II): ⟨2⟩ ← (−3) ⟨4⟩ + ⟨2⟩.

That is, a multiple (−3, in this case) of one row (in this case, row 4) is added to row 2, and the result replaces the previous row 2. Finally, a Type (III) row operation in which the second and third rows are exchanged is represented by

(III): ⟨2⟩ ↔ ⟨3⟩.

(Note that a double arrow is used for Type (III) operations.)
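In Mathematica (the system used in the first excerpt on this page), the three row operations can be coded directly on an augmented matrix. The helper names below are our own illustration, not the authors' notation.

(* Type (I): multiply row i by the nonzero scalar c *)
scaleRow[m_, i_, c_] := ReplacePart[m, i -> c m[[i]]]

(* Type (II): add c times row p to row t *)
addMultiple[m_, t_, c_, p_] := ReplacePart[m, t -> c m[[p]] + m[[t]]]

(* Type (III): switch rows i and j *)
swapRows[m_, i_, j_] := Permute[m, Cycles[{{i, j}}]]

For example, scaleRow[aug, 1, 1/5] applied to the augmented matrix aug of Example 2 below performs the first operation used there.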

We now illustrate the use of the first two operations with the following example:

Example 2

Let us solve the following system of linear equations:

5x − 5y − 15z = −40
4x − 2y − 6z = −19
3x − 6y − 17z = −41.

The augmented matrix associated with this system is

5  −5  −15 | −40
4  −2   −6 | −19
3  −6  −17 | −41.

We now perform row operations on this matrix to give it a simpler form, proceeding through the columns from left to right. First column: We choose the (1,1) position as our first pivot entry. We want to place a 1 in this position. The row containing the current pivot is often referred to as the pivot row, so row 1 is currently our pivot row. Now, when placing a 1 in the matrix, we generally use a Type (I) operation to multiply the pivot row by the reciprocal of the pivot entry. In this case, we multiply each entry of the first row by 1/5.

Row Operation: (I): ⟨1⟩ ← (1/5) ⟨1⟩
Resulting Matrix:
1  −1   −3 |  −8
4  −2   −6 | −19
3  −6  −17 | −41

For reference, we circle all pivot entries as we go on. Next we want to convert all entries below this pivot to 0. We will refer to this as "targeting" these entries. As each entry is changed to 0 it is called the target, and its row is called the target row. To change a target entry to 0, we always use the following Type (II) row operation:

(II): ⟨target row⟩ ← (−target value) × ⟨pivot row⟩ + ⟨target row⟩

For example, to zero out (target) the (2,1) entry, we use the Type (II) operation ⟨2⟩ ← (−4) × ⟨1⟩ + ⟨2⟩. (That is, we add (−4) times the pivot row to the target row.) To perform this operation, we first do the following side calculation:

−4 × ⟨1⟩:  −4    4   12   32
   + ⟨2⟩:   4   −2   −6  −19
   (sum):   0    2    6   13

The resulting sum is now substituted in place of the old row 2.

Row Operation: (II): ⟨2⟩ ← (−4) × ⟨1⟩ + ⟨2⟩
Resulting Matrix:
1  −1   −3 |  −8
0   2    6 |  13
3  −6  −17 | −41

Note that even though we multiplied row 1 by −4 in the side calculation, row 1 itself was not changed in the matrix. Only row 2, the target row, was altered by this Type (II) row operation. Similarly, to target the (3,1) position (that is, convert the (3,1) entry to 0), row 3 becomes the target row. We use the Type (II) operation ⟨3⟩ ← (−3) × ⟨1⟩ + ⟨3⟩. The side calculation involved is:

−3 × ⟨1⟩:  −3    3    9   24
   + ⟨3⟩:   3   −6  −17  −41
   (sum):   0   −3   −8  −17

The resulting sum is now substituted in place of the old row 3.

Row Operation: (II): ⟨3⟩ ← (−3) × ⟨1⟩ + ⟨3⟩
Resulting Matrix:
1  −1   −3 |  −8
0   2    6 |  13
0  −3   −8 | −17

Our work on the first column is finished. The last matrix is associated with the linear system

x − y − 3z = −8
2y + 6z = 13
−3y − 8z = −17.

Note that x has been eliminated from the second and third equations, which makes this system simpler than the original. However, as we will prove later, this new system has the same solution set.

Second column: The pivot entry for the second column must be in a lower row than the previous pivot, so we choose the (2,2) position as our next pivot entry. Thus, row 2 is now the pivot row. We first perform a Type (I) operation on the pivot row to convert the pivot entry to 1. Multiplying each entry of row 2 by 1/2 (the reciprocal of the pivot entry), we obtain

Row Operation: (I): ⟨2⟩ ← (1/2) ⟨2⟩
Resulting Matrix:
1  −1  −3 |   −8
0   1   3 | 13/2
0  −3  −8 |  −17

Next, we target the (3,2) entry, so row 3 becomes the target row. We use the Type (II) operation ⟨3⟩ ← 3 × ⟨2⟩ + ⟨3⟩. The side calculation is as follows:

3 × ⟨2⟩:   0    3    9   39/2
  + ⟨3⟩:   0   −3   −8   −17
  (sum):   0    0    1    5/2

The resulting sum is now substituted in place of the old row 3.

Row Operation: (II): ⟨3⟩ ← 3 × ⟨2⟩ + ⟨3⟩
Resulting Matrix:
1  −1  −3 |   −8
0   1   3 | 13/2
0   0   1 |  5/2

Our work on the second column is finished. The last matrix corresponds to the linear system

x − y − 3z = −8
y + 3z = 13/2
z = 5/2.

Notice that y has been eliminated from the third equation. Again, this new system has exactly the same solution set as the original system.

Third column: The pivot entry for the third column must be in a lower row than the previous pivot, so we choose the (3,3) position as our next pivot entry. Thus, row 3 is now the pivot row. However, the pivot entry already has the value 1, and so no Type (I) operation is required. Also, there are no more rows below the pivot row, so there are no entries to target. Hence, we need no further row operations, and the final matrix is

1  −1  −3 |   −8
0   1   3 | 13/2
0   0   1 |  5/2,

which corresponds to the last linear system given above. Conclusion: At this point, we know from the third equation that z = 5/2. Substituting this result into the second equation and solving for y, we obtain y + 3(5/2) = 13/2, and hence, y = −1. Finally, substituting these values for y and z into the first equation, we obtain x − (−1) − 3(5/2) = −8, and hence x = −3/2. This process of working backwards through the set of equations to solve for each variable in turn is called back substitution.

Thus, the final system has a unique solution: the ordered triple (−3/2, −1, 5/2). Moreover, we can check by substitution that (−3/2, −1, 5/2) is also a solution to the original system. In fact, Gaussian Elimination always produces the complete solution set, so (−3/2, −1, 5/2) is the unique solution to the original linear system.
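As an independent check (our own sketch, not part of the text), Mathematica's built-in functions reproduce both the reduced matrix and the unique solution:

aug = {{5, -5, -15, -40}, {4, -2, -6, -19}, {3, -6, -17, -41}};
RowReduce[aug]
(* {{1, 0, 0, -3/2}, {0, 1, 0, -1}, {0, 0, 1, 5/2}} *)
LinearSolve[aug[[All, 1 ;; 3]], aug[[All, 4]]]
(* {-3/2, -1, 5/2}, agreeing with the back substitution above *)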

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780128008539000025

Handbook of Algebra

Henk C.A. van Tilborg , in Handbook of Algebra, 1996

Lemma 1.4

Let 0 ≤ α ≤ 0.5. Then

(1) Σ_{i=0}^{⌊αn⌋} (n choose i) = 2^{(h(α) + o(1))n}, n → ∞.
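A quick numerical illustration of the lemma (our own Mathematica sketch, not part of the chapter; h is the binary entropy function, written out explicitly here):

h[a_] := -a Log2[a] - (1 - a) Log2[1 - a];
With[{a = 0.3, n = 1000},
 {N[Log2[Sum[Binomial[n, i], {i, 0, Floor[a n]}]]/n], h[a]}]
(* approximately {0.877, 0.881}; the small gap is the o(1) term *)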

This chapter is organized in the following way. In Section 2 the basic concepts of block codes are explained. Projective codes (no coordinate is a scalar multiple of another coordinate) of maximal size are constructed and an important relation between a linear code and its orthogonal complement is given. In Section 3 it is shown that ideals in the residue class ring of q-ary polynomials modulo x^n − 1 define a very large class of codes. The zeros of the generator polynomial of such an ideal determine their error-correcting capability. In Section 4 generalizations of cyclic codes are given by means of algebraic geometry. They lead to a powerful error-correcting algorithm and to codes that are asymptotically very interesting and may soon even be of practical value. In Section 5, a brief discussion of the available books on coding theory will be given.

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/S1570795496800155

Finite-dimensional Lie algebras

N. Sthanumoorthy, in Introduction to Finite and Infinite Dimensional Lie (Super)algebras, 2016

1.9 Root system in Euclidean spaces and root diagrams

Let G be a semisimple Lie algebra (over an algebraically closed field of characteristic 0), H be a maximal toral subalgebra, Φ = {α1, α2, …, αn} ⊂ H* be the set of roots of G, and G = H + Σ_{α∈Φ} G_α be the root space decomposition. Let Q be the set of all rational numbers, R be the set of all real numbers, and E_Q be the Q-subspace of H* spanned by all roots. We have n = dim H*. If Q is the base field, then we can extend the base field to R with E being the corresponding real vector space. That is, E = R ⊗_Q E_Q. Hence E is a Euclidean space. Φ contains a basis of E and the dimension of E is n.

The following results can be established [40]:

(a)

Φ spans E and 0 does not belong to Φ.

(b)

If α ∈ Φ then −α ∈ Φ, but no other scalar multiple of α is a root.

(c)

If α, β ∈ Φ, then β − (2(β,α)/(α,α)) α ∈ Φ.

(d)

If α, β ∈ Φ, then 2(β,α)/(α,α) ∈ Z.

Definition 37

A reflection in a Euclidean space E is an invertible linear transformation leaving pointwise fixed some hyperplane (subspace of codimension one) and sending any vector orthogonal to that hyperplane into its negative.

A reflection preserves the inner product on E. Any nonzero vector α determines a reflection σ_α with reflecting hyperplane P_α = {β ∈ E | (β,α) = 0}.

So we get the following:

σ_α(β) = β − (2(β,α)/(α,α)) α.

Denote 〈β,α〉 = 2(β,α)/(α,α). Then σ_α(β) = β − 〈β,α〉α.
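Both the bracket and the reflection are one-liners in Mathematica. The sketch below is our own illustration, with an arbitrarily chosen pair of vectors at angle 2π/3:

bracket[b_, a_] := 2 Dot[b, a]/Dot[a, a];
sigma[a_][b_] := b - bracket[b, a] a;
alpha = {1, 0}; beta = {-1/2, Sqrt[3]/2};
bracket[beta, alpha]   (* -1 *)
sigma[alpha][beta]     (* {1/2, Sqrt[3]/2}: beta reflected in the hyperplane P_alpha *)
sigma[alpha][alpha]    (* {-1, 0} = -alpha, as required *)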

A subset Φ of the Euclidean space E is called a root system in E [40] if

(1)

Φ is finite, spans E, and does not contain 0.

(2)

If α ∈ Φ, the only multiples of α in Φ are ±α.

(3)

If α ∈ Φ, the reflection σ_α leaves Φ invariant.

(4)

If α, β ∈ Φ, then 〈β,α〉 = 2(β,α)/(α,α) ∈ Z.

Let Φ denote a root system of rank n in a Euclidean space E. A subset △ of Φ is called a base if

(1)

△ is a basis of E.

(2)

Each root β can be written as β = Σ_{α∈△} k_α α with all integral coefficients k_α being nonnegative or all nonpositive.

The roots in △ are called simple. The height of the above root β is Σ_α k_α. A root system Φ is called irreducible if it cannot be partitioned into the union of two proper subsets such that each root in one set is orthogonal to each root in the other set.

Remark 26

(1)

Let V be an n-dimensional vector space over a field F. The dual space of V, denoted by V*, is the set of all linear maps from V to F. If f, g ∈ V* then f + g and λf for λ ∈ F are defined by (f + g)(v) = f(v) + g(v) for v ∈ V and (λf)(v) = λf(v). Given a basis {v1, v2, …, vn} of a vector space V, one can define the associated dual basis as follows: Let f_i : V → F be the linear map defined by

f_i(v_j) = 1 for i = j, and f_i(v_j) = 0 for i ≠ j.

One can check that {f1, …, fn} is a basis of V* and that it is dual to the basis {v1, v2, …, vn} of V (a computational sketch follows this remark).
(2)

Dual root system: Let E be an inner product space and R be the root system. Then one can verify that R̂ = {2α/(α,α) : α ∈ R} is also a root system in E. Also, one can verify that the Cartan matrix of R̂ is the transpose of the Cartan matrix of R. Here R̂ is the dual root system to R. One can prove that the Weyl groups of R and R̂ are isomorphic.
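Returning to item (1): if the basis vectors are taken as the rows of a matrix m, the coordinate rows of the dual functionals are given by Transpose[Inverse[m]]. A minimal Mathematica sketch (our own illustration, with an arbitrary basis of R^2):

m = {{1, 1}, {0, 1}};                      (* basis v1 = (1,1), v2 = (0,1) *)
dual = Transpose[Inverse[m]];              (* rows are the dual functionals f1, f2 *)
Table[dual[[i]] . m[[j]], {i, 2}, {j, 2}]  (* the identity matrix: f_i(v_j) = 1 iff i = j *)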

Consider the space R^{l+1} with the Euclidean inner product. Let ϵ_i be the vector in the Euclidean space with ith entry 1 and all other entries zero. Now define

R = {±(ϵ_i − ϵ_j) : 1 ≤ i < j ≤ l + 1}.

One can show that R is a root system in E. Let R be a root system in the real inner product space E. Let α, β ∈ R with β ≠ ±α. Then it can be proved that

〈α,β〉〈β,α〉 ∈ {0, 1, 2, 3}.

Now one can show that there are only a few possibilities for 〈α,β〉. If we take two roots α, β in R with α ≠ ±β and (β,β) ≥ (α,α), then we have

|〈β,α〉| = |2(β,α)|/(α,α) ≥ |2(α,β)|/(β,β) = |〈α,β〉|.

Let α and β be two roots in E. The cosine of the angle θ between vectors α, β ∈ E is given by the formula ∥α∥ ∥β∥ cos θ = (α,β).

So

〈β,α〉 = 2(β,α)/(α,α) = 2∥α∥∥β∥ cos θ / ∥α∥² = 2(∥β∥/∥α∥) cos θ.

Similarly

〈α,β〉 = 2(∥α∥/∥β∥) cos θ.

Hence

〈α,β〉〈β,α〉 = 4 cos²θ,

which is a nonnegative integer. As 0 ≤ cos²θ ≤ 1, 〈α,β〉 and 〈β,α〉 have the same sign.

So when α ≠ ±β and ∥β∥ ≥ ∥α∥, we have Table 1.1.

Table 1.1. Angles between root vectors as explained above

〈α,β〉   〈β,α〉   4cos²θ = 〈α,β〉〈β,α〉   θ      ∥β∥²/∥α∥²
 0        0        0                      π/2    undetermined
 1        1        1                      π/3    1
−1       −1        1                      2π/3   1
 1        2        2                      π/4    2
−1       −2        2                      3π/4   2
 1        3        3                      π/6    3
−1       −3        3                      5π/6   3
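The rows of Table 1.1 are easy to reproduce numerically. A Mathematica sketch (ours, not the author's) for the last row, where θ = 5π/6 and ∥β∥² = 3∥α∥²:

bracket[b_, a_] := 2 Dot[b, a]/Dot[a, a];
alpha = {1, 0};
beta = Sqrt[3] {Cos[5 Pi/6], Sin[5 Pi/6]};
Simplify[{bracket[alpha, beta], bracket[beta, alpha]}]
(* {-1, -3}; their product is 3 = 4 cos^2(5 Pi/6), matching the table *)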

Properties of root diagrams:

We have the following properties of the root system Φ.

Let α, β ∈ Φ.

(a)

If the angle between α and β is strictly obtuse, then α + β ∈ Φ.

(b)

If the angle between α and β is strictly acute and (β,β) ≥ (α,α), then α − β ∈ Φ.

Let E = R^l with the Euclidean inner product:

(1)

The rank l of the root system is 1. Now there is only one possibility, namely (A1), as shown below.

(2)

Now we consider rank l = 2. In this case, there are four possibilities.

(i)

Let θ = π/2. Using the properties of root diagrams, we can find Φ, which contains four roots as shown below. In this case, the root system is said to be of type A1 × A1.

(ii)

Let θ = 2π/3. Using the properties of root diagrams, we can find Φ, which contains six roots as shown below. In this case, the root system is said to be of type A2.

(iii)

Let θ = 3π/4. Using the properties of root diagrams, α + β and 2α + β are roots. The root diagram is shown below. In this case, the root system is of type B2.

(iv)

Let θ = 5π/6. The positive root system is {α, β, α + β, 2α + β, 3α + β, 3α + 2β}. The root diagram is shown below. This root system is of type G2.

Remark 27

In each case, one can check the axioms directly and determine the Weyl group W.

Definition 38 (Reduced root system and Chevalley's normalization)

Let V be a vector space and V* be its dual. Then one can define a symmetry S_α to be an automorphism (i.e., S_α : V → V) of V such that

(i)

S_α(α) = −α and

(ii)

the set H of elements of V fixed by S_α is a hyperplane of V. That is, H = {v ∈ V | S_α(v) = v} is a hyperplane of V.

A subset R of a vector space V is said to be a root system in V if

(a)

R is finite, spans V, and does not contain 0,

(b)

The orthogonal transformation S_α(β) = β − (2〈β,α〉/|α|²) α, for α ∈ R, transforms R to itself.

(c)

2(β,α)/|α|² is an integer if α and β are in R.

This root system R is said to be reduced if, for each α ∈ R, α and −α are the only roots proportional to α (hence 2α ∉ R).

The following properties of the reduced system can be verified: Let R be a reduced root system.

(1)

For a semisimple Lie algebra G, the root system is isomorphic to R.

(2)

For G to be a simple Lie algebra, it is necessary and sufficient that R should be irreducible.

(3)

Chevalley's normalization [15]: For each α ∈ R, choose a nonzero element x_α ∈ G_α, such that

[x_α, x_β] = N_{α,β} x_{α+β}   if α + β ∈ R,
[x_α, x_β] = 0                 if α + β ∉ R and α + β ≠ 0.

Here N_{α,β} is a nonzero scalar. The coefficients N_{α,β} determine the multiplication table of G.

One can choose the elements x_α so that

[x_α, x_{−α}] = H_α for all α ∈ R, and N_{−α,−β} = −N_{α,β} for α, β, α + β ∈ R.

In this case, for α, β ∈ R with α + β ∈ R, let p be the greatest integer such that β − pα ∈ R. Then N_{α,β} = ±(p + 1).

Remark 28

One can also refer to the Cartan-Weyl basis in Section 1.17.

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780128046753000017

Forward models for EEG

C. Phillips, ... K. Friston, in Statistical Parametric Mapping, 2007

Deflation technique

By assuming that the unit eigenvalue of B is simple, it can easily be shown that any other solution will only differ by an additive constant, i.e., a scalar multiple of 1_Nv. Let p be any vector such that 1_Nv^t p = 1 and suppose that we seek the solution of Eqn. 28.35 such that p^t v = 0. Then, looking for this particular solution, Eqn. 28.35 becomes:

Under the assumption that p^t v = 0, the matrix C = (B − 1_Nv p^t) is a deflation of B and has no unit eigenvalue, so that (I_Nv − C)^(-1) = (I_Nv − B + 1_Nv p^t)^(-1) exists. Eqn. 28.35 can be rewritten as:

28.42
[v1]   [C11 C12 C13] [v1]   [G1]
[v2] = [C21 C22 C23] [v2] + [G2] [j]
[v3]   [C31 C32 C33] [v3]   [G3]

and this system of equations can be solved by calculating:

28.43 v = (I_Nv − C)^(-1) G j = (I_Nv − B + 1_Nv p^t)^(-1) G j

where v satisfies p^t v = 0.

Each vector v_i is of size N × 1, so if, for example, p is defined by:

28.44 p = [0 ⋯ 0  0 ⋯ 0  p̄ ⋯ p̄]^t, where the three blocks contain Nv1, Nv2, and Nv3 entries, respectively,

with p̄ = 1/Nv3, then p^t v = 0 simply means that the mean of v_3 is zero. Therefore Eqn. 28.43 provides us with the solution that is mean-corrected over the scalp surface.
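The deflation idea can be reproduced on a toy example. The following Mathematica sketch is our own illustration (a 2 × 2 matrix, not the chapter's EEG matrices): b has a simple unit eigenvalue with eigenvector (1, 1), and p satisfies 1^t p = 1.

onesVec = {1, 1};
b = {{0, 1}, {1, 0}};                (* b . onesVec == onesVec: unit eigenvalue *)
p = {1/2, 1/2};                      (* onesVec . p == 1 *)
c = b - Outer[Times, onesVec, p];    (* the deflation of b *)
Eigenvalues[c]                       (* {-1, 0}: the unit eigenvalue is gone *)
Inverse[IdentityMatrix[2] - c]       (* exists, so the deflated system is solvable *)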

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780123725608500280

Finite Dimensional Vector Spaces

Stephen Andrilli, David Hecker, in Elementary Linear Algebra (Fourth Edition), 2010

Some Elementary Properties of Vector Spaces

The next theorem contains several simple results regarding vector spaces. Although these are obviously true in the most familiar examples, we must prove them in general before we know they hold in every possible vector space.

Theorem 4.1

Let V be a vector space. Then, for every vector v in V and every real number a, we have

(1) a0 = 0: Any scalar multiple of the zero vector yields the zero vector.
(2) 0v = 0: The scalar zero multiplied by any vector yields the zero vector.
(3) (−1)v = −v: The scalar −1 multiplied by any vector yields the additive inverse of that vector.
(4) If av = 0, then a = 0 or v = 0: If a scalar multiplication yields the zero vector, then either the scalar is zero, or the vector is the zero vector, or both.

Part (3) justifies the notation for the additive inverse in property (4) of the definition of a vector space and shows we do not need to distinguish between −v and (−1)v.

This theorem must be proved directly from the properties in the definition of a vector space because at this point we have no other known facts about general vector spaces. We prove parts (1), (3), and (4). The proof of part (2) is similar to the proof of part (1) and is left as Exercise 18.

Proof

(Abridged):

Part (1): By direct proof,

a0 = a0 + 0                     by property (3)
   = a0 + (a0 + (−[a0]))        by property (4)
   = (a0 + a0) + (−[a0])        by property (2)
   = a(0 + 0) + (−[a0])         by property (5)
   = a0 + (−[a0])               by property (3)
   = 0.                         by property (4)

Part (3): First, note that v + (−1)v = 1v + (−1)v (by property (8)) = (1 + (−1))v (by property (6)) = 0v = 0 (by part (2) of Theorem 4.1). Therefore, (−1)v acts as an additive inverse for v. We will finish the proof by showing that the additive inverse for v is unique. Hence, (−1)v will be the additive inverse of v.

Suppose that x and y are both additive inverses for v. Thus, x + v = 0 and v + y = 0. Hence,

x = x + 0 = x + (v + y) = (x + v) + y = 0 + y = y.

Therefore, any two additive inverses of v are equal. (Note that this is, in essence, the same proof we gave for Theorem 2.10, the uniqueness of inverses for matrix multiplication. You should compare these proofs.)

Part (4): This is an "If A then B or C" statement. Therefore, we assume that av = 0 and a ≠ 0 and prove that v = 0. Now,

v = 1v                 by property (8)
  = ((1/a) · a)v       because a ≠ 0
  = (1/a)(av)          by property (7)
  = (1/a)0             because av = 0
  = 0.                 by part (1) of Theorem 4.1

Theorem 4.1 is valid even for unusual vector spaces, such as those in Examples 7 and 8. For instance, part (4) of the theorem claims that, in general, av = 0 implies a = 0 or v = 0. This statement can quickly be verified for the vector space V = R+ with operations ⊕ and ⊙ from Example 7. In this case, a ⊙ v = v^a, and the zero vector 0 is the real number 1. So, part (4) is equivalent here to the true statement that v^a = 1 implies a = 0 or v = 1.

Applying parts (2) and (3) of Theorem 4.1 to an unusual vector space V gives a quick way of finding the zero vector 0 of V and the additive inverse −v for any vector v in V. For instance, in Example 8, we have V = R² with scalar multiplication a ⊙ [x, y] = [ax + a − 1, ay − 2a + 2]. To find the zero vector 0 in V, we simply multiply the scalar 0 by any general vector [x, y] in V:

0 = 0 ⊙ [x, y] = [0x + 0 − 1, 0y − 2(0) + 2] = [−1, 2].

Similarly, if [x, y] ∈ V, then −1 ⊙ [x, y] gives the additive inverse of [x, y].

−[x, y] = −1 ⊙ [x, y] = [−1x + (−1) − 1, −1y − 2(−1) + 2] = [−x − 2, −y + 4].
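Both computations can be checked mechanically. A minimal Mathematica sketch (ours, not the book's) of the Example 8 operations as restated above:

smult[a_, {x_, y_}] := {a x + a - 1, a y - 2 a + 2};
smult[0, {x, y}]              (* {-1, 2}: the zero vector of this space *)
Simplify[smult[-1, {x, y}]]   (* {-x - 2, 4 - y}: the additive inverse found above *)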

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780123747518000202

Vectors in Geometry

Walter Meyer, in Geometry and Its Applications (Second Edition), 2006

EXERCISES WITH A GRAPHING UTILITY

16.

In Exercise 8, what is the effect of changing the length of the given tangent vector v0? Use a graphing utility to make a series of plots for different scalar multiples of the original v0 [for example, try 2v0 and (−1)v0].

17.

Suppose an artist wants two cubic spline curves to fit the points P0, P1, P2 as given in Example 5.24. Tangent vectors v0 and v2 are the same as in that example, but the artist has the freedom to choose v1. She wants to do it so that the curve made by the two splines passes through the points as straight as possible, with no unnecessary curving or looping around. She begins with v1 = 〈4, 2〉 and draws the graph, but then she compares the following alternatives (which are each 90° around from the previous one): v1 = 〈−2, 4〉, v1 = 〈−4, −2〉, v1 = 〈2, −4〉. Use a graphing utility to carry out these experiments. Which v1 would the artist choose?

18.

The artist of Exercise 17 is interested in being able to make a choice of v1 according to some formula, so that she doesn't have to carry out trial-and-error experiments, as in the last exercise, for every new problem. The formula may involve p0, p1, p2, v0, v2. Experiment with some formulas you could apply to calculate v1. With a graphing utility, study whether or not they achieve the objective of "straightness" in some specific cases you devise.

19.

(a) With a graphing utility, plot the splines in Exercise 11.

(b)

In Exercise 11, suppose you had mistakenly used −v1 as the first tangent vector for the second spline while correctly using the given v1 for the second tangent vector of the first spline. Try to predict the difference it would make to the overall shape formed by the two splines. Make a rough sketch.

(c)

With the graphing utility, make a precise picture of the overall shape that results from the error described in part (b).

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780123694270500062

Finite Dimensional Vector Spaces

Stephen Andrilli, David Hecker, in Elementary Linear Algebra (Fifth Edition), 2016

Some Elementary Properties of Vector Spaces

The next theorem contains several simple results regarding vector spaces. Although these are obviously true in the most familiar examples, we must prove them in general before we know they hold in every possible vector space.

Theorem 4.1

Let V be a vector space. Then, for every vector v in V and every real number a, we have

(1) a0 = 0: Any scalar multiple of the zero vector yields the zero vector.
(2) 0v = 0: The scalar zero multiplied by any vector yields the zero vector.
(3) (−1)v = −v: The scalar −1 multiplied by any vector yields the additive inverse of that vector.
(4) If av = 0, then a = 0 or v = 0: If a scalar multiplication yields the zero vector, then either the scalar is zero, or the vector is the zero vector, or both.

Part (3) justifies the notation for the additive inverse in property (4) of the definition of a vector space and shows we do not need to distinguish between −v and (−1)v.

This theorem must be proved directly from the properties in the definition of a vector space because at this point we have no other known facts about general vector spaces. We prove parts (1), (3), and (4). The proof of part (2) is similar to the proof of part (1) and is left as Exercise 17.

Proof (Abridged)

Part (1): By direct proof,

a0 = a0 + 0                     by property (3)
   = a0 + (a0 + (−[a0]))        by property (4)
   = (a0 + a0) + (−[a0])        by property (2)
   = a(0 + 0) + (−[a0])         by property (5)
   = a0 + (−[a0])               by property (3)
   = 0.                         by property (4)

Part (3): First, note that v + (−1)v = 1v + (−1)v (by property (8)) = (1 + (−1))v (by property (6)) = 0v = 0 (by Part (2) of Theorem 4.1). Therefore, (−1)v acts as an additive inverse for v. We will finish the proof by showing that the additive inverse for v is unique. Hence (−1)v will be the additive inverse of v. Suppose that x and y are both additive inverses for v. Thus, x + v = 0 and v + y = 0. Hence,

x = x + 0 = x + (v + y) = (x + v) + y = 0 + y = y.

Therefore, any two additive inverses of v are equal. (Note that this is, in essence, the same proof we gave for Theorem 2.11, the uniqueness of inverses for matrix multiplication. You should compare these proofs.)

Part (4): This is an "If A then B or C" statement. Therefore, we assume that av = 0 and a ≠ 0 and prove that v = 0. Now,

v = 1v                 by property (8)
  = ((1/a) · a)v       because a ≠ 0
  = (1/a)(av)          by property (7)
  = (1/a)0             because av = 0
  = 0.                 by part (1) of Theorem 4.1

Theorem 4.1 is valid even for unusual vector spaces, such as those in Examples 7 and 8. For example, part (4) of the theorem claims that, in general, av = 0 implies a = 0 or v = 0. This statement can quickly be verified for the vector space V = R+ with operations ⊕ and ⊙ from Example 7. In this case, a ⊙ v = v^a, and the zero vector 0 is the real number 1. Then, part (4) is equivalent here to the true statement that v^a = 1 implies a = 0 or v = 1.

Applying parts (2) and (3) of Theorem 4.1 to an unusual vector space V gives a quick way of finding the zero vector 0 of V and the additive inverse −v for any vector v in V. For instance, in Example 8, we have V = R² with scalar multiplication a ⊙ [x,y] = [ax + a − 1, ay − 2a + 2]. To find the zero vector 0 in V, we simply multiply the scalar 0 by any general vector [x,y] in V.

0 = 0 ⊙ [x,y] = [0x + 0 − 1, 0y − 2(0) + 2] = [−1, 2].

Similarly, if [x,y] ∈ V, then −1 ⊙ [x,y] gives the additive inverse of [x,y].

−[x,y] = −1 ⊙ [x,y] = [−1x + (−1) − 1, −1y − 2(−1) + 2] = [−x − 2, −y + 4].
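Part (4) in the Example 7 space can also be confirmed symbolically. A Mathematica sketch (ours, not the book's), using the operation a ⊙ v = v^a described above; the exact shape of Reduce's output may vary by version:

smult7[a_, v_] := v^a;
Reduce[smult7[a, v] == 1 && v > 0, {a, v}, Reals]
(* the solution set reduces to a == 0 or v == 1, exactly as part (4) asserts *)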

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780128008539000049