For a long time now, I have been thinking to myself: “I know how to develop software, and I even have a bit of hands-on experience with drivers and compiling operating systems (I once wrote my own boot-sector code and ran it), but I never really understood how a CPU works.”

So I decided to take matters into my own hands and try to build a CPU. At first I thought about creating it all from scratch, but quickly enough I discovered that the number of transistors I would have to solder is enormous (I will soon show that even a simple NAND gate requires 4 transistors). So I decided to build it on an FPGA, which will make life easier, and instead of creating my own architecture, I decided to try and mimic the 6502 microprocessor, the CPU at the heart of the NES. If everything goes according to plan, I will be able to play Super Mario on my own CPU.

This first post will have nothing to do with FPGAs, since logic gates are provided out of the box when using an FPGA, so it is also not part of my implementation of the 6502 microprocessor. But in my opinion it is the most important part, because logic gates are the basic building blocks of any electronic device, and of a CPU in particular.
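To give a taste of where this is going, here is a minimal Python sketch (my own illustration, unrelated to any FPGA tooling) modeling a NAND gate as a function and deriving the other basic gates from it. NAND being functionally complete is exactly what makes it such a useful building block:

```python
# NAND is functionally complete: every other gate can be built from it.
def nand(a: int, b: int) -> int:
    return 0 if (a and b) else 1

def not_(a: int) -> int:
    return nand(a, a)              # NOT(a) = NAND(a, a)

def and_(a: int, b: int) -> int:
    return not_(nand(a, b))        # AND = NOT after NAND

def or_(a: int, b: int) -> int:
    return nand(not_(a), not_(b))  # De Morgan: OR(a, b) = NAND(NOT a, NOT b)

# Exhaustive truth-table check against Python's bitwise operators
for a in (0, 1):
    for b in (0, 1):
        assert and_(a, b) == (a & b)
        assert or_(a, b) == (a | b)
        assert not_(a) == (1 - a)
```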

Let $A$ be an associative algebra with the product $ab$ (for example, the algebra of $n \times n$ matrices $M_n(\mathbb{F})$). Then the space $A$ with the bracket $[a, b] = ab - ba$ is a Lie algebra, denoted by $A^-$ or by $\mathfrak{gl}(A)$.
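To see this bracket in action, here is a small numpy sketch (illustrative only, using random $3 \times 3$ matrices) checking the two Lie algebra axioms, antisymmetry and the Jacobi identity, for the commutator $[a, b] = ab - ba$:

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, c = (rng.standard_normal((3, 3)) for _ in range(3))

def bracket(x, y):
    """Commutator [x, y] = xy - yx."""
    return x @ y - y @ x

# Antisymmetry: [a, b] = -[b, a]
assert np.allclose(bracket(a, b), -bracket(b, a))

# Jacobi identity: [a, [b, c]] + [b, [c, a]] + [c, [a, b]] = 0
jacobi = (bracket(a, bracket(b, c))
          + bracket(b, bracket(c, a))
          + bracket(c, bracket(a, b)))
assert np.allclose(jacobi, 0)
```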

Moreover, in analytical mechanics (for the physicist readers) you have met the Poisson bracket, which satisfies the Jacobi identity. And in quantum mechanics, physicists always talk about the commutator of two operators (for example, the Hamiltonian and the momentum). Theorems from Lie algebra theory can easily be applied to these subjects, which is another way of seeing how we can learn facts about the universe simply by playing with math.

I hope I will have time to add a few more posts about the subject (such as the main theorems and so on).

It’s been a long time since my last post about mathematics, which is kind of a shame, because I wanted math to be one of the main subjects of this blog.
So I decided to start writing a little more about the things I love in mathematics (and hopefully I will follow through).

In this post, I’m assuming some basic linear algebra knowledge. This post is a little dry, but it defines an important algebraic structure that is used in some very beautiful subjects and theorems, so stay tuned.

Algebras and Subalgebras

Definition: An algebra $A$ is a vector space over a field $\mathbb{F}$, endowed with a binary bilinear operation $(\cdot) \colon A \times A \to A$, s.t. for all $a, b, c \in A$ and $\lambda \in \mathbb{F}$:

$(a + b) \cdot c = a \cdot c + b \cdot c, \qquad a \cdot (b + c) = a \cdot b + a \cdot c, \qquad (\lambda a) \cdot b = a \cdot (\lambda b) = \lambda (a \cdot b)$

For example, the polynomials in $n$ variables, $\mathbb{F}[x_1, \dots, x_n]$, with multiplication as the operation, form an associative and commutative algebra. This algebra is usually called the polynomial algebra.
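In the single-variable case this is easy to check numerically with numpy’s coefficient-array representation of polynomials (a sketch for illustration, one variable only):

```python
import numpy as np

# Polynomials as coefficient arrays, highest degree first
p = np.array([1.0, 2.0, 3.0])  # x^2 + 2x + 3
q = np.array([4.0, 5.0])       # 4x + 5

# Multiplication is commutative: p*q == q*p
assert np.allclose(np.polymul(p, q), np.polymul(q, p))

# ...and associative: (p*q)*p == p*(q*p)
assert np.allclose(np.polymul(np.polymul(p, q), p),
                   np.polymul(p, np.polymul(q, p)))
```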

But please note: NOT all algebras are commutative! For example, we can look at the vector space of $n \times n$ matrices ($M_n(\mathbb{F})$), with matrix multiplication as the operation. For $n \geq 2$, $M_n(\mathbb{F})$ is an associative, non-commutative algebra.
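Non-commutativity is easy to exhibit with two concrete $2 \times 2$ matrices:

```python
import numpy as np

A = np.array([[0, 1],
              [0, 0]])
B = np.array([[0, 0],
              [1, 0]])

print(A @ B)   # [[1 0], [0 0]]
print(B @ A)   # [[0 0], [0 1]]
assert not np.array_equal(A @ B, B @ A)  # AB != BA
```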

We can also define a subalgebra:

Definition: A subspace $B \subseteq A$ is called a subalgebra if $B \cdot B \subseteq B$.

For example, the diagonal matrices are a subalgebra of the matrix algebra.
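A quick numerical check that the diagonal matrices are closed under multiplication (closure under the product is the only thing to verify here, since they clearly form a subspace):

```python
import numpy as np

D1 = np.diag([1.0, 2.0, 3.0])
D2 = np.diag([4.0, 5.0, 6.0])

# The product of diagonal matrices is again diagonal:
# the diagonals simply multiply entrywise.
product = D1 @ D2
assert np.array_equal(product, np.diag([4.0, 10.0, 18.0]))
```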

Homomorphism

And just like with any other algebraic structure, we can define a homomorphism between two algebras as a linear map between the algebras that preserves the operation, that is:

Definition: Let $A, B$ be algebras. A linear map $\varphi \colon A \to B$ is called a homomorphism if $\varphi(a \cdot a') = \varphi(a) \cdot \varphi(a')$ for all $a, a' \in A$.
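A concrete example: evaluation at a point $c$ is a homomorphism from the polynomial algebra to the field, since $(pq)(c) = p(c)\,q(c)$. A small numpy sketch of this (single variable, my own choice of polynomials):

```python
import numpy as np

p = np.array([1.0, 0.0, -2.0])   # x^2 - 2
q = np.array([3.0, 1.0])         # 3x + 1
c = 2.0                          # evaluation point

# phi(p) = p(c) preserves multiplication: (p*q)(c) == p(c) * q(c)
lhs = np.polyval(np.polymul(p, q), c)
rhs = np.polyval(p, c) * np.polyval(q, c)
assert np.isclose(lhs, rhs)
```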

Ideals

Definition: A subspace $I$ of an algebra $A$ is a left (resp. right, resp. two-sided) ideal if:

$A \cdot I \subseteq I$ (resp. $I \cdot A \subseteq I$, resp. both $A \cdot I \subseteq I$ and $I \cdot A \subseteq I$).

It is clear from the definition that any ideal is a subalgebra.
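For intuition, in the matrix algebra the matrices whose first column is zero form a left ideal: multiplying such a matrix on the left by anything keeps the first column zero, while multiplying on the right generally does not. A numerical sketch (random matrices, illustration only):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))   # arbitrary matrix in the algebra
M = rng.standard_normal((3, 3))
M[:, 0] = 0                       # M lies in I = {matrices with zero first column}

# Left ideal: the first column of A @ M is A times the zero column, so A·I ⊆ I
assert np.allclose((A @ M)[:, 0], 0)

# But I is not a right ideal: M @ A generally has a nonzero first column
print((M @ A)[:, 0])
```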

Using a two-sided ideal $I$, we can look at the quotient space $A/I$ (a space where every vector in $I$ is identified with the zero vector, meaning that two vectors that differ only by a vector in $I$ are considered identical). The quotient space has a canonical algebra structure given by:

$(a + I) \cdot (b + I) = a \cdot b + I$

which is called the quotient algebra $A/I$.
It’s easy to see that the canonical map $\pi \colon A \to A/I$ given by $\pi(a) = a + I$ is an algebra homomorphism.
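A hands-on way to see a quotient algebra: take the polynomials modulo the ideal generated by $x^2 + 1$, representing each coset by its polynomial remainder. Here is a sketch (my own helper `pi`, using numpy’s polynomial division) checking that the canonical map respects multiplication:

```python
import numpy as np

m = np.array([1.0, 0.0, 1.0])    # modulus x^2 + 1, generating the ideal I = (x^2 + 1)

def pi(p):
    """Canonical map: send p to its remainder mod x^2 + 1 (a coset representative)."""
    _, r = np.polydiv(p, m)
    return r

p = np.array([1.0, 2.0, 3.0])    # x^2 + 2x + 3
q = np.array([2.0, 0.0, 1.0])    # 2x^2 + 1

# pi is a homomorphism: reducing p*q agrees with multiplying the reductions
lhs = pi(np.polymul(p, q))
rhs = pi(np.polymul(pi(p), pi(q)))
assert np.allclose(lhs, rhs)
```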

Moreover, if $\varphi \colon A \to B$ is an algebra homomorphism, the kernel $\ker \varphi$ is a two-sided ideal of $A$ and the image $\operatorname{im} \varphi$ is a subalgebra of $B$.
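To make the kernel statement concrete: for the evaluation-at-$0$ homomorphism on polynomials, the kernel is the set of polynomials with zero constant term, and multiplying any polynomial by a kernel element lands back in the kernel. A quick numpy check (my own example polynomials):

```python
import numpy as np

c = 0.0                          # phi(p) = p(0) reads off the constant term

k = np.array([1.0, 2.0, 0.0])    # x^2 + 2x, in ker(phi) since k(0) = 0
p = np.array([5.0, 1.0])         # 5x + 1, an arbitrary polynomial

# The kernel is an ideal: p*k still vanishes at 0
assert np.isclose(np.polyval(np.polymul(p, k), c), 0.0)
```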