Multiplication of Matrices - Rules and Methods

#Algebra
TL;DR
The multiplication of matrices is defined only when the number of columns in the first matrix equals the number of rows in the second — and the result is computed row-by-column using dot products. This article covers the dimension rule, the step-by-step method, the four key properties, worked examples in $2 \times 2$ and non-square cases, and the most common mistakes.
Ashra Siddiqui · Last updated on May 13, 2026 · 11 min read

What Is Matrix Multiplication?

The multiplication of matrices is an operation that combines two matrices $A$ and $B$ into a new matrix $AB$. It is not element-by-element multiplication. Instead, each entry of the product is computed as the dot product of a row from $A$ and a column from $B$.

Two matrices $A$ and $B$ can be multiplied in the order $AB$ only when the number of columns in $A$ equals the number of rows in $B$. If $A$ has dimensions $m \times n$ and $B$ has dimensions $n \times p$, then the product $AB$ exists and has dimensions $m \times p$.

The slogan: inner dimensions must match; outer dimensions give the result.

$$\underbrace{A}_{m \times n} \;\times\; \underbrace{B}_{n \times p} \;=\; \underbrace{AB}_{m \times p}$$

If the inner dimensions do not match, the product is undefined.
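The dimension rule can be checked mechanically before any arithmetic. Here is a minimal sketch in Python (the helper name `product_shape` is ours, purely for illustration):

```python
def product_shape(a_shape, b_shape):
    """Return the (rows, cols) of AB if the product is defined, else None."""
    m, n = a_shape  # A is m x n
    p, q = b_shape  # B is p x q
    # inner dimensions n and p must agree; outer dimensions give the result
    return (m, q) if n == p else None

print(product_shape((2, 3), (3, 2)))  # (2, 2): inner 3 = 3, defined
print(product_shape((2, 3), (2, 3)))  # None: inner 3 != 2, undefined
```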

The Method — Row by Column

Each entry of the product matrix is computed by:

  1. Taking a row from $A$

  2. Taking a column from $B$

  3. Multiplying corresponding entries

  4. Adding the products

This is exactly the dot product of the row and the column.
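The four steps translate directly into code. The sketch below uses plain nested lists so every step stays visible — entry $(i, j)$ is the dot product of row $i$ of the first matrix with column $j$ of the second:

```python
def mat_mul(A, B):
    """Row-by-column product: entry (i, j) is the dot product of
    row i of A with column j of B."""
    m, n, p = len(A), len(B), len(B[0])
    assert len(A[0]) == n, "inner dimensions must match"
    # steps 1-4: pick a row, pick a column, multiply pairwise, add up
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(p)]
            for i in range(m)]

print(mat_mul([[1, 0], [0, 1]], [[4, 5], [6, 7]]))  # [[4, 5], [6, 7]]
```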

A First Worked Example — $2 \times 2$ times $2 \times 2$

Let $A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}$ and $B = \begin{pmatrix} 5 & 6 \\ 7 & 8 \end{pmatrix}$. Compute $AB$.

Both matrices are $2 \times 2$, so the inner dimensions match ($2 = 2$) and the result $AB$ is $2 \times 2$.

To compute the $(1, 1)$ entry of $AB$ — the entry in row 1, column 1 — take row 1 of $A$ and column 1 of $B$:

$$(AB)_{1,1} = (1)(5) + (2)(7) = 5 + 14 = 19$$

To compute $(1, 2)$ — row 1 of $A$ with column 2 of $B$:

$$(AB)_{1,2} = (1)(6) + (2)(8) = 6 + 16 = 22$$

To compute $(2, 1)$ — row 2 of $A$ with column 1 of $B$:

$$(AB)_{2,1} = (3)(5) + (4)(7) = 15 + 28 = 43$$

To compute $(2, 2)$ — row 2 of $A$ with column 2 of $B$:

$$(AB)_{2,2} = (3)(6) + (4)(8) = 18 + 32 = 50$$

Putting them together:

$$AB = \begin{pmatrix} 19 & 22 \\ 43 & 50 \end{pmatrix}$$
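The four entries can be double-checked in a few lines of Python (an illustrative verification, not part of the method itself):

```python
A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
# entry (i, j) = row i of A dotted with column j of B
AB = [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
      for i in range(2)]
print(AB)  # [[19, 22], [43, 50]]
```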

A Second Example — Non-square Dimensions

Let $A$ be $2 \times 3$ and $B$ be $3 \times 2$:

$$A = \begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{pmatrix}, \quad B = \begin{pmatrix} 7 & 8 \\ 9 & 10 \\ 11 & 12 \end{pmatrix}$$

Inner dimensions match ($3 = 3$). Product $AB$ is $2 \times 2$.

$$(AB)_{1,1} = (1)(7) + (2)(9) + (3)(11) = 7 + 18 + 33 = 58$$
$$(AB)_{1,2} = (1)(8) + (2)(10) + (3)(12) = 8 + 20 + 36 = 64$$
$$(AB)_{2,1} = (4)(7) + (5)(9) + (6)(11) = 28 + 45 + 66 = 139$$
$$(AB)_{2,2} = (4)(8) + (5)(10) + (6)(12) = 32 + 50 + 72 = 154$$

$$AB = \begin{pmatrix} 58 & 64 \\ 139 & 154 \end{pmatrix}$$
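The same row-by-column recipe checks out in code — note the sum now runs over three terms, because the inner dimension is 3:

```python
A = [[1, 2, 3], [4, 5, 6]]        # 2 x 3
B = [[7, 8], [9, 10], [11, 12]]   # 3 x 2
# inner dimension is 3, so each entry is a sum of three products
AB = [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(2)]
      for i in range(2)]
print(AB)  # [[58, 64], [139, 154]]
```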

What Is Scalar Multiplication of a Matrix?

Scalar multiplication is the simpler cousin of matrix multiplication. A scalar is just an ordinary number (not a matrix). Scalar multiplication multiplies every entry of a matrix by that number.

$$kA = k \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix} = \begin{pmatrix} k a_{11} & k a_{12} \\ k a_{21} & k a_{22} \end{pmatrix}$$

Worked example. Multiply $A = \begin{pmatrix} 2 & -1 \\ 0 & 3 \end{pmatrix}$ by the scalar $4$:

$$4A = \begin{pmatrix} 8 & -4 \\ 0 & 12 \end{pmatrix}$$
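Because scalar multiplication touches each entry independently, it is a one-line operation in code — compare this with the triple loop matrix multiplication needs:

```python
A = [[2, -1], [0, 3]]
k = 4
kA = [[k * entry for entry in row] for row in A]  # scale every entry
print(kA)  # [[8, -4], [0, 12]]
```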

Properties of scalar multiplication. Let $k$ and $\ell$ be scalars, $A$ and $B$ be matrices of the same size:

  • $k(A + B) = kA + kB$ (distributes over matrix addition)

  • $(k + \ell)A = kA + \ell A$ (distributes over scalar addition)

  • $k(\ell A) = (k\ell)A$ (associative)

  • $1 \cdot A = A$ (1 is the scalar identity)

  • $0 \cdot A = O$ (the zero matrix)

Scalar vs matrix multiplication — the distinction students miss.

| Operation | What you multiply | Dimensions need to match? | Commutative? |
| --- | --- | --- | --- |
| Scalar multiplication $kA$ | Number × matrix | No — works on any matrix | Yes: $kA = Ak$ |
| Matrix multiplication $AB$ | Matrix × matrix | Yes — inner dimensions must match | No — $AB \neq BA$ generally |

Scalar multiplication is a one-line operation; matrix multiplication is the row-by-column dot product. Two very different operations that share the word "multiplication."

The Four Key Properties of Matrix Multiplication

Matrix multiplication has four properties that govern how it behaves. Three of them feel familiar from ordinary arithmetic; one is dramatically different.

Property 1: Non-Commutativity ($AB \neq BA$ in general)

Unlike ordinary number multiplication where $3 \times 5 = 5 \times 3$, matrix multiplication is not commutative. In general, $AB \neq BA$ — even when both products are defined.

Take $A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}$ and $B = \begin{pmatrix} 5 & 6 \\ 7 & 8 \end{pmatrix}$ from above. We computed:

$$AB = \begin{pmatrix} 19 & 22 \\ 43 & 50 \end{pmatrix}$$

Now compute $BA$:

$$BA = \begin{pmatrix} 5 & 6 \\ 7 & 8 \end{pmatrix} \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} = \begin{pmatrix} 23 & 34 \\ 31 & 46 \end{pmatrix}$$

These are different matrices. Order matters in matrix multiplication.
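Computing both orders side by side makes the asymmetry concrete (a sketch using plain lists; `mat_mul` is our illustrative helper, defined inline):

```python
def mat_mul(A, B):
    n = len(B)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(len(B[0]))]
            for i in range(len(A))]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(mat_mul(A, B))  # [[19, 22], [43, 50]]
print(mat_mul(B, A))  # [[23, 34], [31, 46]] -- a different matrix
```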

Property 2: Associativity ($(AB)C = A(BC)$)

Matrix multiplication is associative — you can group multiplications either way and get the same result:

$$(AB)C = A(BC)$$

This is what makes long chains of matrix products well-defined.
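Associativity is easy to spot-check numerically. Here is a sketch with one arbitrary third matrix $C$ (chosen by us for illustration — any compatible matrix works):

```python
def mat_mul(A, B):
    n = len(B)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(len(B[0]))]
            for i in range(len(A))]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C = [[1, 0], [2, 1]]               # arbitrary third matrix for the check
left = mat_mul(mat_mul(A, B), C)   # (AB)C
right = mat_mul(A, mat_mul(B, C))  # A(BC)
print(left == right)  # True -- grouping does not change the product
```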

Property 3: Distributivity over Addition

Matrix multiplication distributes over addition from both sides:

$$A(B + C) = AB + AC, \qquad (A + B)C = AC + BC$$
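Both distributive laws can be verified the same way — a sketch with sample matrices of our choosing:

```python
def mat_mul(A, B):
    n = len(B)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(len(B[0]))]
            for i in range(len(A))]

def mat_add(A, B):
    return [[A[i][j] + B[i][j] for j in range(len(A[0]))] for i in range(len(A))]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C = [[1, 0], [2, 1]]
# left distributivity: A(B + C) = AB + AC
print(mat_mul(A, mat_add(B, C)) == mat_add(mat_mul(A, B), mat_mul(A, C)))  # True
# right distributivity: (A + B)C = AC + BC
print(mat_mul(mat_add(A, B), C) == mat_add(mat_mul(A, C), mat_mul(B, C)))  # True
```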

Property 4: Identity Matrix

There is a matrix called the identity matrix $I$ (or $I_n$ for the $n \times n$ version), which is the matrix-multiplication equivalent of the number 1:

$$I_2 = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \quad I_3 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$$

For any $n \times n$ matrix $A$:

$$AI = IA = A$$

The identity matrix and its scalar multiples $kI$ are the only matrices that commute with every matrix of the same size.
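Multiplying by $I$ from either side can be confirmed directly (the sample matrix $A$ here is arbitrary):

```python
def mat_mul(A, B):
    n = len(B)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(len(B[0]))]
            for i in range(len(A))]

I2 = [[1, 0], [0, 1]]
A = [[2, 5], [7, 3]]   # any 2 x 2 matrix works here
print(mat_mul(A, I2) == A)  # True
print(mat_mul(I2, A) == A)  # True
```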

Where Matrix Multiplication Appears in the Real World

  • 3D graphics — rotations and projections. Every 3D rotation in a video game or animated film is a $3 \times 3$ matrix multiplied with a position vector. A character moving across the screen involves thousands of these multiplications per frame.

  • Neural networks. The forward pass of a neural network is a series of matrix multiplications — each layer's weights matrix multiplied with the previous layer's output vector, followed by a non-linearity. Training the network adjusts the entries of the weight matrices.

  • Solving systems of equations. $Ax = b$ where $A$ is a coefficient matrix, $x$ is the vector of unknowns, and $b$ is the vector of constants. Solving involves matrix operations including multiplication.

  • Markov chains. A transition matrix multiplied repeatedly with itself predicts long-term behaviour of probabilistic systems — used in PageRank, weather forecasting, and queueing theory.

  • Quantum mechanics. Observable physical quantities are represented as matrices; measurement outcomes come from matrix multiplications on state vectors.
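The Markov-chain application can be made concrete with a toy two-state weather model. The transition probabilities below are invented for illustration; the point is that repeated multiplication of the transition matrix with itself converges to the chain's long-run behaviour:

```python
# Invented 2-state chain: state 0 = sunny, state 1 = rainy.
# Row i gives the probabilities of tomorrow's state given today's state i.
T = [[0.9, 0.1],
     [0.5, 0.5]]

def mat_mul(A, B):
    n = len(B)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(len(B[0]))]
            for i in range(len(A))]

# Repeated multiplication T, T^2, T^3, ... approaches the long-run distribution.
P = T
for _ in range(50):
    P = mat_mul(P, T)
print([round(x, 4) for x in P[0]])  # [0.8333, 0.1667]: long-run sunny/rainy split
```

The rows of the repeated product flatten toward the stationary distribution ($5/6$ sunny, $1/6$ rainy for these numbers) — the same mechanism PageRank uses on a vastly larger matrix.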

At Bhanzu, our trainers tie matrix multiplication back to one of these real situations — usually rotations in 3D graphics — early in the teaching sequence. Without that anchor, the row-by-column method can feel arbitrary; with it, the why of every entry becomes visible.

A Worked Example — Wrong Path First

Compute $AB$ where $A = \begin{pmatrix} 2 & 1 \\ 3 & 4 \end{pmatrix}$ and $B = \begin{pmatrix} 5 & 6 \\ 7 & 8 \end{pmatrix}$.

The intuitive (wrong) approach. A student new to matrices multiplies element by element, treating matrix multiplication like adding matrices (which is element-wise).

$$AB \stackrel{?}{=} \begin{pmatrix} (2)(5) & (1)(6) \\ (3)(7) & (4)(8) \end{pmatrix} = \begin{pmatrix} 10 & 6 \\ 21 & 32 \end{pmatrix}$$

Why it fails. Element-by-element multiplication is called the Hadamard product — it's a real operation, but it's not matrix multiplication. Standard matrix multiplication uses the dot-product method (row × column), which produces a different result and is the one used in every application from graphics to neural networks.

The correct method — row by column.

$$(AB)_{1,1} = (2)(5) + (1)(7) = 10 + 7 = 17$$
$$(AB)_{1,2} = (2)(6) + (1)(8) = 12 + 8 = 20$$
$$(AB)_{2,1} = (3)(5) + (4)(7) = 15 + 28 = 43$$
$$(AB)_{2,2} = (3)(6) + (4)(8) = 18 + 32 = 50$$

$$AB = \begin{pmatrix} 17 & 20 \\ 43 & 50 \end{pmatrix}$$

The element-wise answer and the correct answer share no entries. This is one of those mistakes that's hard to catch by inspection — the wrong answer looks like a matrix and follows from a coherent (but incorrect) rule.
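The contrast between the two rules is stark when both are computed side by side (the element-wise version is the Hadamard product, shown here only to expose the mistake):

```python
A = [[2, 1], [3, 4]]
B = [[5, 6], [7, 8]]
# the tempting wrong rule: multiply corresponding entries (Hadamard product)
hadamard = [[A[i][j] * B[i][j] for j in range(2)] for i in range(2)]
# the correct rule: row-by-column dot products
standard = [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]
print(hadamard)  # [[10, 6], [21, 32]] -- not matrix multiplication
print(standard)  # [[17, 20], [43, 50]] -- actual matrix multiplication
```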

The memorizer who learned "to multiply matrices, multiply corresponding entries" — perhaps generalising from how matrix addition works — hits this exact confusion. The fix is to learn matrix multiplication as "composition of transformations" rather than as element-wise arithmetic.

Common Mistakes with Matrix Multiplication

Mistake 1: Element-by-element multiplication instead of row-by-column

Where it slips in: First exposure to matrix multiplication, especially after learning matrix addition (which is element-wise).

Don't do this: $\begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} \begin{pmatrix} 5 & 6 \\ 7 & 8 \end{pmatrix} = \begin{pmatrix} 5 & 12 \\ 21 & 32 \end{pmatrix}$ — element-wise.

The correct way: Use the dot-product method: each entry of the product is a row of the first times a column of the second. The result is $\begin{pmatrix} 19 & 22 \\ 43 & 50 \end{pmatrix}$. The memorizer who treats matrix multiplication as a simple generalisation of addition hits this.

Mistake 2: Multiplying when dimensions don't match

Where it slips in: Forgetting to check the inner-dimensions rule before computing.

Don't do this: Attempting to compute $AB$ when $A$ is $2 \times 3$ and $B$ is $2 \times 3$. The inner dimensions are $3$ and $2$ — they don't match, so $AB$ is undefined.

The correct way: Always check inner dimensions first. If $A$ is $m \times n$ and $B$ is $p \times q$, then $AB$ exists only when $n = p$, and the result is $m \times q$. The rusher who starts computing before checking dimensions wastes time and produces nonsense answers.

Mistake 3: Assuming $AB = BA$

Where it slips in: Anywhere the student wants to "rearrange" matrices the way they'd rearrange numbers. $3 \cdot 5 = 5 \cdot 3$ — but for matrices, generally $AB \neq BA$.

Don't do this: In a proof, replacing $AB$ with $BA$ as if they were equal.

The correct way: Treat matrix multiplication as non-commutative. Order is information — $T_2 T_1$ means "first apply $T_1$, then apply $T_2$", and the order changes the result. The second-guesser who pauses to check whether $A$ and $B$ actually commute is asking the right question.

The real-world version of the mistake. In Heisenberg's 1925 formulation of quantum mechanics, the non-commutativity of certain matrices ($XP - PX \neq 0$) is exactly the Heisenberg uncertainty principle — the deep physical fact that you can't simultaneously know both the position and momentum of a particle with perfect precision. Treating $XP$ as if it equals $PX$ would erase the physics. The order in matrix multiplication is not bookkeeping — it's information.

The Mathematicians Who Shaped Matrix Multiplication

James Joseph Sylvester (1814–1897, England) — Coined the term matrix in 1850, borrowing the Latin word for "womb" because a matrix produces ("gives birth to") many determinants when you take submatrices. Sylvester and Cayley were lifelong friends and collaborators; together they founded the modern theory of invariants.

Arthur Cayley (1821–1895, England) — In his 1857 paper A Memoir on the Theory of Matrices, defined matrix multiplication as the composition of linear transformations — establishing exactly the row-by-column rule used today, including its non-commutativity.

Maxime Bôcher (1867–1918, USA) — Generalised Cayley's definition to arbitrary-sized matrices in his 1907 textbook on linear algebra, making matrix multiplication a fully general operation rather than a special case for small dimensions.

Three mathematicians — across two countries and nearly sixty years — built the operation that today powers every 3D graphics engine, neural network, and search algorithm.

A Practical Next Step

Try these three problems before moving on to matrix inverses and determinants.

  1. Compute $AB$ where $A = \begin{pmatrix} 1 & 0 \\ 2 & 3 \end{pmatrix}$ and $B = \begin{pmatrix} 4 & 5 \\ 6 & 7 \end{pmatrix}$.

  2. For the same $A$ and $B$, compute $BA$. Notice that $AB \neq BA$.

  3. Let $A$ be $3 \times 2$ and $B$ be $3 \times 2$. Does $AB$ exist? Does $BA$ exist? (Hint: check inner dimensions both ways.)

If problem 3 confused you, that's the dimension-rule test at work — check the inner numbers. Want a live Bhanzu trainer to walk through more matrix problems? Book a free demo class — online globally.


Frequently Asked Questions

What is matrix multiplication?
Matrix multiplication is an operation that combines two matrices $A$ and $B$ into a third matrix $AB$, where each entry of the product is the dot product of a row of $A$ with a column of $B$. It's defined only when the number of columns in $A$ equals the number of rows in $B$.
When can two matrices be multiplied?
When the number of columns in the first matrix equals the number of rows in the second. If $A$ is $m \times n$ and $B$ is $n \times p$, then $AB$ exists and has dimensions $m \times p$. If those inner dimensions don't match, the product is undefined.
Is matrix multiplication commutative?
No. In general, $AB \neq BA$ — even when both products are defined. This is one of the deepest differences between matrix multiplication and ordinary number multiplication. The order matters because matrix multiplication corresponds to composition of transformations, and the order of transformations changes the result.
What is the identity matrix?
The identity matrix $I_n$ is the $n \times n$ matrix with $1$'s on the diagonal and $0$'s everywhere else. For any $n \times n$ matrix $A$, $AI = IA = A$ — analogous to multiplying a number by $1$.
What is the difference between matrix multiplication and the Hadamard product?
Matrix multiplication uses the row-by-column dot product. The Hadamard product is element-by-element: $A \circ B$ has entries $A_{ij} B_{ij}$. Both are real operations used in different contexts. Standard "matrix multiplication" (and the one used in graphics, neural networks, and most of physics) is the row-by-column version.
Who invented matrix multiplication?
Arthur Cayley defined matrix multiplication in his 1857 paper, building on James Joseph Sylvester's 1850 coining of the term matrix. Maxime Bôcher generalised the operation to arbitrary dimensions in 1907.
Where is matrix multiplication used today?
3D graphics (every rotation, scaling, and projection), neural networks (each forward pass is a sequence of matrix multiplications), solving systems of linear equations, Markov chains, Google's PageRank algorithm, and quantum mechanics. It's one of the most-used operations in modern computation.
✍️ Written By
Ashra Siddiqui
Mathematics - Subject Matter Expert
I am a Subject Matter Expert at Bhanzu, working with the LX team and bringing over 10 years of experience in teaching mathematics across primary and middle school levels. I specialize in Algebra, Geometry, and Arithmetic, with a strong focus on simplifying complex concepts through interactive, concept-based learning. My teaching approach is student-centered and differentiated, ensuring every learner builds confidence, strengthens problem-solving skills, and develops a deeper understanding of mathematics. I hold a Master’s degree in Physics, a Bachelor’s degree in Mathematics, and a central-level teaching certification. I have also trained students for competitive exams and Olympiads. Through my work and content, I aim to make math meaningful, relatable, and enjoyable for both students and parents.