I mean, there are 8-bit, 16-bit, 32-bit, and 64-bit processors,
and also 8-bit and 16-bit graphics and music.
That's weird.
Are 8/16-bit graphics and music just graphics and music run on an 8/16-bit processor?
If so, why isn't there 32/64-bit graphics/music?
Also, how many bits do today's systems actually use?
I've heard the 6th-gen consoles weren't 128-bit but 64-bit extended to 128-bit. What does that mean?
Thanks

8- and 16-bit music is a genre. It is based on the typical music of consoles with 8- and 16-bit processors. With extreme optimization, support for larger cartridges, and a better sound chip, the NES could play any modern song at full quality. In this scenario, the processor would stay the same (8-bit). In fact, the Genesis, a 16-bit console, can already play Bad Apple with voices at nearly full quality and barely compressed graphics, just because it has a YM2612 sound chip.
8- and 16-bit graphics are a similar case. It is very intensive on old processors to make graphics, and many games are already pushing the envelope. Yet, it is not a hard limitation.
On that note, there are 32-bit and 64-bit graphics (in a similar vein to 8- and 16-bit graphics and music; i.e., not a hard limitation). The 32X, 3DO, and PS1 have 32-bit style graphics. The N64 has 64-bit style graphics. In fact, Super Mario RPG is arguably 24-bit due to a special chip (the SA-1) inside the cartridge, so you could say it has 24-bit graphics.
So, to answer your first question: no, 8-bit and 16-bit graphics and music are not tied to the processor. They're styles, and they can be played/displayed on any processor (with enough work).

Today, computers are either 32-bit or 64-bit because they have either 32-bit or 64-bit processors. I'm running Windows 10 on a 64-bit system, but does it look the slightest bit like the N64? No. It doesn't. To narrow it down further: unless your PC is absolute crap and really old, it's 64-bit.
"64-bit extended to 128-bit" means that, internally, the console still works with 64-bit words. It is just programmed to use two words for wide calculations, thus sort of making it 128-bit.
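Here's a rough sketch (in Python, whose ints are arbitrary-precision, so the 64-bit masking below is just for illustration) of how a 64-bit machine can add two 128-bit numbers by chaining two 64-bit additions and carrying between them:

```python
MASK64 = (1 << 64) - 1  # all 64 bits set; keeps each half within one 64-bit word

def add128(a_lo, a_hi, b_lo, b_hi):
    """Add two 128-bit numbers, each given as (low, high) 64-bit halves."""
    lo = (a_lo + b_lo) & MASK64          # low 64-bit word of the result
    carry = (a_lo + b_lo) >> 64          # carry out of the low word (0 or 1)
    hi = (a_hi + b_hi + carry) & MASK64  # high word, plus the carry
    return lo, hi

# 2**64 - 1 plus 1 overflows the low word and carries into the high word:
lo, hi = add128(MASK64, 0, 1, 0)
# lo == 0, hi == 1, i.e. the 128-bit value 2**64
```

Real consoles do this in hardware or assembly rather than Python, but the carry-chaining idea is the same.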

And finally, to answer the title: a bit is a 0 or a 1. This is the system all computers use internally. It is a system with 2 digits, making it base 2 (binary). We use base 10 (decimal) in our everyday lives (10 digits, 0-9), but it is faster for hardware to process base 2. 8-bit processors work with 8-bit words, 16-bit processors with 16-bit words, and so on. (A "word" is the natural chunk of data a processor handles at once; a byte, by modern convention, is always 8 bits.)
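You can play with base 2 vs. base 10 yourself; for example, in Python:

```python
# int(s, 2) reads a string of 0s and 1s as a base-2 number;
# bin() goes the other way (it drops leading zeros).
n = int("00010100", 2)
print(n)       # 20
print(bin(n))  # 0b10100
```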
An 8-bit word is simply 8 bits long. It's as simple as that. For example, <00010100> is an 8-bit word, and equals 20 in decimal. <10101001 11101000> is a 16-bit word, and equals 43,496 in decimal. A 64-bit word looks like <01001101 10100000 11110000 10010110 10100001 10101010 01010101 00000000>. The maximum value you can make with 8 bits is 255 (256 values). For 16 bits, it's 65,535 (65,536 values). For 64 bits, it's 18,446,744,073,709,551,615 (over 18 quintillion).
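Those maximums all come from one rule: n bits give 2**n distinct patterns (including zero), so the biggest unsigned value is 2**n - 1. A quick Python check:

```python
# Maximum unsigned value for each common word size:
for bits in (8, 16, 32, 64):
    print(bits, 2**bits - 1)
# 8 255
# 16 65535
# 32 4294967295
# 64 18446744073709551615

# And the 16-bit example from above, verified directly:
print(int("1010100111101000", 2))  # 43496
```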

Anyway, you're welcome. All of this information was off the top of my head, by the way.

CPU speed hasn't increased much in the last 5 years, for multiple reasons.
We've sort of reached a peak for now, and a 128-bit processor wouldn't really be noticeably better right now.
So no, not any time soon. If ever.

Well, I've heard that CPU speed hasn't increased lately because we've reached something like the physical limit.
Like it's physically impossible to go any faster with today's technology; isn't that true?
I also heard about a light-pulse CPU that can reach THz, is that true?