I don’t know about you guys, but my parents were always yelling at me about sitting too close to the TV. Blah blah blah glasses, blah blah blind, blah blah go outside and play with your friends. Ma, you know I don’t have any friends, why do you keep bringing it up? Anyway, back on topic: is there truly a correct distance to sit from your screen? Let’s start by unpacking the claim that sitting too close to the TV is bad for your health. As it turns out, this widespread belief may have originated from a legitimate concern over a TV recall in the 1960s, when a certain line of defective televisions actually did emit a significant amount of X-ray radiation. That is bad for your eyes, and for everything else. But given that there haven’t been any reports of this problem since then, why are people still concerned about it? Well, computer eye strain is something you’ve probably experienced after focusing on anything for a long enough period of time, even books or magazines. This happens because the muscles attached to your eyes tense up to help them focus.

So just like any other muscle, if you tighten them up for long enough they start to ache. And there is some truth to this effect being stronger with a backlit source. If your eyes need to be constantly adjusting between the brightness of your screen and the sharply contrasting darkness of your room, you’ll experience this computer eye strain fatigue more quickly.

There is good news though. You can slow down the effects by blinking more frequently to moisturize your eyes, glancing away from your book or monitor periodically, and by using a moderate brightness that’s not too dissimilar to the brightness of your surroundings. Besides, computer eye strain is easily relieved and doesn’t cause any long-term problems as far as we know. Just rest your eyes for a while by moving to a dark environment, or relax them by not focusing on anything too intently.

With that said, I still don’t recommend sitting too close to your screen. Not because it’ll hurt you, but because it’ll degrade your viewing experience. The correct distance to sit from your screen is at least far enough that you can’t make out the individual pixels that make the image look blocky and distorted.

So if you’re in the market for a new display, there’s a simple way to figure out what size you should buy. For a 1080p display, figure out how far you’ll be sitting from the screen and divide that distance by about 1.6 to get the size of the largest TV or monitor you should purchase. You may even want to get something slightly smaller to be on the safe side with respect to image quality, but there’s no reason to get something much smaller than the formula recommends. As for 4K displays, the pixels are much smaller, so you can get away with buying a larger screen, especially as that will make it easier to see the details that the higher resolution makes possible. So for 4K, your display size should be about the same as your viewing distance, or just slightly smaller.

And that’s it. Once you’ve done some simple math, purchased the right display, and maybe installed some bias lighting (which we covered here) behind the display to improve contrast and reduce computer eye strain even further, you’re all set to sit on your butt and stare at binarymove.com for as long as you want.
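The simple math above can be sketched in a few lines of code. This is just a restatement of the article’s rules of thumb (divide by about 1.6 for 1080p, roughly one-to-one for 4K); the function name `max_display_size` is my own, hypothetical helper, not an established formula from any standard.

```python
def max_display_size(viewing_distance_in, resolution="1080p"):
    """Largest recommended screen diagonal (inches) for a viewing
    distance, using the article's rules of thumb."""
    if resolution == "1080p":
        # 1080p: divide the viewing distance by about 1.6
        return viewing_distance_in / 1.6
    elif resolution == "4k":
        # 4K: screen size can roughly match the viewing distance
        return float(viewing_distance_in)
    raise ValueError("unknown resolution: " + resolution)

# Sitting 8 feet (96 inches) from the screen:
print(max_display_size(96, "1080p"))  # 60.0 -> at most a ~60" 1080p TV
print(max_display_size(96, "4k"))     # 96.0 -> a 4K TV can be much larger
```

As the article says, treat the result as an upper bound and round down a little if you care about image quality.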

Intel processors warranty explanation (Mon, 18 Dec 2017)

Today we will be talking about the Intel processors warranty process. We will consider the following questions:

What are the differences between the OEM and box processor warranty?

Who should I contact in the event of an issue?

What information do I need to provide for a warranty exchange?

What questions do we receive about the warranty process?

Where can I find out more about the processor warranty process?

Let’s begin by discussing what OEM or tray processors are. OEM stands for Original Equipment Manufacturer. OEMs are typically large companies that buy Intel processors in bulk and incorporate them as a system component in a PC. The term “tray” comes from the trays the processors are shipped in from the factory. OEMs such as Hewlett Packard, Acer, Lenovo, Dell and others manufacture complete PC systems that use Intel processors; these OEMs then sell the systems and provide complete warranty and technical support services for the OEM-branded systems directly themselves.

OEM branded systems

An Intel boxed, or retail, processor is sold by Intel in a factory-sealed package to channel partners, who typically integrate them and sell systems to end customers. These boxed processors usually come with a processor, a thermal solution and a manual (here is an example of one). Most importantly, they come with a three-year warranty. Intel recommends working with your technology provider in the event that warranty service is required, since this will usually shorten the time an exchange takes. If this is not possible, you may contact Intel directly. Certain processors are also available in the PRC in a localized box. The manual contains installation instructions, warranty terms and conditions, and technical support contact numbers. Retain the plastic clamshell, as it helps protect the processor if you ever need to ship it. Prior to contacting support for an Intel processor warranty exchange, there is certain critical information you will need to have at your fingertips. Using the third-generation Intel Core i3-3220 processor as an example, capture all the information highlighted in the picture.

Information on processors

Typically, Intel boxed processors come with a thermal solution in the box. Information to capture from the label prior to contacting support includes: the Intel part number, processor number, and manufacturer. The Intel provided thermal solution comes with pre-applied thermal interface material.

Other information you will be asked to provide includes: date of purchase, proof of purchase, contact information, and a description of the issue. In the event troubleshooting is needed, you may be asked for: motherboard model and BIOS information; memory part number, along with memory speed and voltage; power supply information; operating system and service pack. Some additional questions you may be asked are:

How long has it been working?

Has anything changed on the system recently?

What have you done to troubleshoot this problem?

Have you tried the processor in a working system?

Now let’s look at some of the questions we often receive from customers about the Intel processors warranty.

Who should I contact if I have an issue with a processor on an OEM system? For technical or warranty questions concerning your OEM system, contact your system manufacturer.

Who should I contact for an Intel boxed processor warranty exchange? Contact your technology provider first since this may expedite the return process. If this is not possible, contact Intel.

Are there any tools that will help me isolate a problem with a processor? You may use the Intel Processor Diagnostic Tool for this purpose provided your system is working.

Is there a post-purchase registration process for Intel boxed processors? No.

Can I extend the normal three-year warranty for Intel boxed processors? No.

Is the fan-heatsink or thermal solution eligible for return under the boxed processor warranty? Yes. The fan-heatsink is covered under the three-year warranty. The agent may still ask for your processor markings to verify the warranty time frame.

If the processor is scratched or otherwise has physical damage, can I receive a new processor under Intel’s warranty process? Physical damage is not eligible for a warranty exchange. Extra care should also be taken when shipping the processor back to Intel as it will be rejected if damaged during the shipping process.

Does Intel provide new or refurbished processors for replacement? Intel currently provides new processors as replacement processors. Check the warranty terms and conditions in the manual for specific details on this subject.

What if I don’t know whether my processor is OEM or boxed? How can I find out if my processor is still under warranty? Let’s show you how to search for this answer. Go to www.intel.com. Click Support. Click Warranty. Select Warranty Center. When the Intel processors warranty center appears, choose “Product Type”. Enter the FPO number of the processor and then click the search button. Note that you have the option to choose technical assistance or to check another product. If the results of this search are inconclusive, you may use the “Contact Us” link that appears, or contact Intel directly.

Intel CPU naming scheme: Core i3, Core i5, Core i7

I love Intel as much as anyone. They make cool products. But when it comes to confusing naming, I think Intel’s Core i3, Core i5, Core i7 scheme takes the cake. I mean, great question: what is a Core i7-4790K? What the heck does any of this even mean? We’ll get to that, but first, a bit of background on why we need product names for processors at all. Wouldn’t it be simpler to just label them with how many gigahertz they run at and call it a day? Simpler, sort of, but at times actually even more confusing. For example, when the Pentium 4 launched, an equivalently clocked Pentium 3 was actually faster because it could do more work with each cycle. As a customer, I would expect the product with the higher number to be the better one, and therein lies the problem. Not all megahertz and gigahertz are created equal, and rating products that way is like rating the performance of a car by what RPM the engine runs at. It’s not actually a real indication of how fast the processor is, but it happened. One of AMD’s attempts to move away from this started in the early 2000s with their PR, or performance rating, naming scheme, where their processors were given a four-digit model number that enthusiasts believed was based on the performance AMD felt they delivered
Intel Core i3

compared to an Intel CPU of that clock speed, but this fixed nothing. They were still indirectly naming according to clock speed, and it wasn’t until Intel introduced the Core series, a line of CPUs that dramatically outperformed their predecessors at much lower clocks, that the megahertz war ended, because Intel needed to shift their marketing away from frequency. Here’s what we have today. Other than the very bare-bones Pentium SKUs, a Core i3 will be your most basic option, with two processing cores and Hyper-Threading for better multitasking. It will have a smaller cache, it’ll consume less power, and it will generally perform worse than a Core i5, but it’ll cost less. Which leads us to the Core i5. I wish I could say the Intel CPU naming scheme was as simple as, “Well, Core i3s have two cores and Core i5s have four cores; the number of cores equals N minus one, where N is the number after the little i,” but it’s not. Mobile Core i5s have two cores and Hyper-Threading, while desktop ones mostly have four cores and no Hyper-Threading. What they all have in common is improved onboard graphics and Turbo Boost, for temporary performance enhancements when your system needs a little bit more. For even more CPU power, you need… Core i7s. All Core i7s have Hyper-Threading for heavy workloads. A Core i7 can have anywhere from two processing cores in an ultrabook all the way up to eight in a workstation. It might support anywhere from two sticks of memory all the way to eight, and it can have a TDP anywhere from around 10 watts all the way to 130 watts, so there’s a ton of variety here, and that’s for a reason. Core i7s tend to have more cache, faster Turbo Boost and better onboard graphics than the lower-tier processors, and other than that, the best summary I can give is this: a Core i7 represents the best thing Intel could build for a given use case, with the biggest drawback being the higher price tag. When you

features of Core i7 CPU

boil it down, that’s all the i-whatever numbers represent: good, better, best within a given segment. Beyond that, on their own, they’re pretty much meaningless. The numbers and letters afterwards sort of mean something if you use the guide, but the safest way to shop is to dig around in ARK and look at the features, core counts and clock speeds of the CPUs you’re comparing to figure out how they stack up. The good news is that as long as you compare within one brand and within the same product generation, those metrics will actually mean something.
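For the mainstream desktop parts of that era, the model string itself can be taken apart mechanically. This is a rough sketch of the convention as it applied to single-digit generations (an i7-4790K is a 4th-generation Core i7, SKU 790, with a K unlocked-multiplier suffix); `decode_core_name` is a hypothetical helper, and the pattern does not cover every Intel SKU (1st-gen and 10th-gen-and-later parts break it).

```python
import re

def decode_core_name(model):
    """Rough decoder for mainstream Intel Core model strings of that era,
    e.g. 'i7-4790K' -> Core i7 family, 4th generation, SKU 790, suffix K."""
    m = re.match(r"i([357])-(\d)(\d{3})([A-Z]*)$", model)
    if not m:
        raise ValueError("unrecognized model: " + model)
    family, gen, sku, suffix = m.groups()
    return {
        "family": "Core i" + family,   # the good / better / best tier
        "generation": int(gen),        # first digit after the dash
        "sku": sku,                    # higher usually means a faster part
        "suffix": suffix or None,      # e.g. K = unlocked, U = low power
    }

print(decode_core_name("i7-4790K"))
print(decode_core_name("i5-3570"))
```

Even so, as the article says, the decoded fields only tell you the tier and generation; for actual capabilities you still have to look the part up in ARK.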

M.2 new fast SSD!!! (Sun, 19 Nov 2017)

For decades now, the clear trend in storage technology has been to make drives as small as possible and as fast as possible. We’ve gone from 4-megabyte drives twice as big as your average refrigerator in the 1950s to multi-terabyte SSDs not much bigger than a credit card, yet still able to move gigabits of data per second. But even as popular as those speedy two-and-a-half-inch SSDs are, it became clear that with advancements in flash storage technology, we could shrink our digital repositories down even smaller without sacrificing speed. Say hello, my friends, to the M.2 new fast SSD, or rather, a new interface for both SSDs and expansion cards that promises higher speeds and less bulk. You see, although SATA III SSDs offer huge performance gains compared to
SATA III

traditional mechanical hard drives, the SATA interface itself, that cable on the back, has a pretty serious bottleneck, topping out at about 550 to 600 megabytes per second. While that may seem very fast, and indeed it is, flash storage has advanced very rapidly in recent years and is now capable of handling data much more quickly than SATA will allow. Although SATA Express was designed to alleviate this bottleneck, it is pretty difficult to find SATA Express drives on the market, even though motherboards supporting it are out there. Fortunately, M.2 provides the same speed benefits as SATA Express with the extra advantage of being much, much smaller and more versatile. Many M.2 drives actually utilize your computer’s PCI Express bus, with the fastest M.2 drives currently on the market using a PCI Express 3.0 x4 connection, translating to a theoretical maximum of about 4 gigabytes per second. Although you probably won’t see real-world speeds quite that high, especially not right now, current M.2 PCIe drives will still be much faster than a regular SATA SSD and should continue to get faster in the future. M.2 is also useful for things other than storage devices. You can get Wi-Fi and Bluetooth cards that will fit into your motherboard’s M.2 slot and take advantage of PCI Express speeds without having a large card plugged in or a dongle hanging out of your case. Even if you’re just using M.2 for storage, you still get the additional benefit of not having to mess around with any power or signal cables. Slot in the card, put in a screw, and you are good to go. But before you rush out to buy an M.2 new fast SSD or wireless adapter or whatever, remember this: not all M.2 cards and slots are the same. Faster M.2 drives will utilize the newest version of PCI Express. That is, 3.0, as of right now.
They’ll also use the new NVMe standard for interacting with the rest of the system, so be sure to check for these things before you buy, especially as some M.2 drives use PCIe 2.0, the older AHCI protocol, or even the existing SATA bus, which makes them basically just mSATA cards that happen to fit in an M.2 slot. In that last case, you won’t see any performance benefit at all. You also have to pay attention to the physical layout of the card. Unlike SATA, M.2, even though it only has one name, uses several different connector types. Most M.2 slots on current motherboards use either the B or M connectors for SSDs and A or E for wireless adapters. Now, some cards are

B and M connectors

actually compatible with multiple motherboard connectors, so whatever you’re buying, just make sure you check before you pull the trigger. Finally, size matters with M.2. Every M.2 card, regardless of its connector type, has a number indicating its dimensions, such as 2242, which means it’s 22 millimeters wide by 42 millimeters long. Just like how some cases will only fit graphics cards up to a certain length, some motherboards and notebooks aren’t compatible with longer M.2 devices. So I guess the final word is this: although there may be more to remember with the M.2 new fast SSD, the extra speed and flexibility are probably worth having to pay a little bit of attention. That is, unless the thought of leaving all 10 of those drive cages in your unnecessarily large gaming tower unused makes you feel sad.
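The size-code convention above (first two digits are the width in millimeters, the rest is the length) is simple enough to sketch in code. The helper names here are my own invention, just to make the rule concrete:

```python
def m2_dimensions(size_code):
    """Split an M.2 size code like '2242' or '22110' into (width, length)
    in millimeters: the first two digits are the width, the rest the length."""
    width = int(size_code[:2])
    length = int(size_code[2:])
    return width, length

def fits(size_code, max_length_mm):
    """Check whether a card fits a slot that only takes up to max_length_mm."""
    _, length = m2_dimensions(size_code)
    return length <= max_length_mm

print(m2_dimensions("2242"))   # (22, 42)
print(fits("22110", 80))       # False: a 110 mm card won't fit an 80 mm slot
```

Note the five-digit codes: a 22110 card is still 22 mm wide, just 110 mm long, which is exactly the kind of card a compact notebook may not accept.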

What is the Pause Break key for?

Imagine… It’s around 2:30 p.m. You’re stuck at work, and you’re desperate to get away for a coffee because your boss is breathing down your neck for, I don’t know, a line graph or something. But suddenly, you glance down at your keyboard, and you get an idea. What would happen if I actually hit the Pause Break key?

So you spring into action, mashing your index finger down on it, and as it bottoms out, your boss freezes, and a delightful coffee appears right in front of you. Well, no no… So what is the Pause Break button actually there for then? I mean, unlike the Scroll Lock key, it doesn’t even toggle that little light on your keyboard.
Well, the origins of the Pause Break key actually go all the way back to the 1800s when zesty memes were sent via telegraph rather than the internet. Back then, if two people were communicating via teleprinter, one person could press Break to break the circuit, which effectively interrupted the sender and let them know that the other end needed to transmit something.

This usually happened when one news source that was sharing a telegraph line with others had some breaking news to report. But even though teleprinters have long been obsolete, the Pause Break key stuck around on computer keyboards as a means to stop or terminate a running program.

Now, this functionality was much more common back before graphical operating systems were a thing. But for fun, you can still use Control+Break to terminate a program in DOS or in other text-only environments. On a more practical note, you might also still see a system administrator using the Pause Break key if a PC is connected to a mainframe and they have to interrupt the boot process to recover a password.

But that doesn’t have much applicability to you folks at home, does it? So a far more common use for you would be to pause the flow of text you might see on your own computer at boot time, like if you would like to see BIOS messages that might help you understand or troubleshoot any issues you’re having. Now, it doesn’t work on every PC, but give it a shot the next time you fire up your system if you’re curious.
The Pause button will also stop any huge avalanche of

command prompt

text scrolling down your screen if you’re using command prompt. And even if you can’t be bothered to care about anything that has to do with old-school command-line computing, there are still two modern applications for Pause Break.

system property screen

Holding down the Windows key and pressing Pause Break will bring up your system properties screen. And some games will let you press Pause to pause your gameplay. Makes sense, right?
It does make me wonder why that isn’t the standard, but even if your favorite game doesn’t use the Pause key for

this purpose, you can always use hotkey software to make this extraneous button do whatever you’d like, be it in games or in Adobe Premiere.

Moore’s Law Is Ending… So, Now What? (Mon, 06 Nov 2017)

Remember when cell phones looked like this? You could call, text, maybe play Snake on it… and it had about 6 megabytes of memory, which was a small miracle at the time. Then, phones got faster
Gordon Moore

and around every two years, you probably upgraded your phone from 8 gigs to 16 to 32, and so on and so forth. This incremental technological progress we’ve all been participating in for years hinges on one key trend, called Moore’s Law. Intel co-founder Gordon Moore made a prediction in 1965 that integrated circuits, or chips, were the path to cheaper electronics. Moore’s Law states that the number of transistors, the tiny switches that control the flow of an electrical current, that can fit on an integrated circuit will double every two years, while the cost will halve. Chip power goes up as cost goes down. That exponential growth has brought massive advances in computing power… hence tiny computers in our pockets! A single chip today can contain billions of transistors, and each transistor is about 14 nanometres across! That’s smaller than most human viruses! Now, Moore’s Law isn’t a law of physics; it’s just a good hunch that’s driven companies to make better chips. But experts are claiming that this trend is slowing down. Granddaddy chip maker Intel recently disclosed that it’s becoming more difficult to roll out smaller transistors on a two-year timeframe while also keeping them affordable. So, to power the next wave of electronics, there are a few promising options in the works. One is quantum computing. Another, currently in the lab stage, is neuromorphic computing: computer chips that are modeled after our own brains! They’re basically capable of learning and remembering all at the same time at an incredibly fast clip. Let’s break that down and start with the human brain. Your brain has billions of neurons, each of which forms synapses, or connections, with other neurons. Synaptic activity relies on ion channels, which control the flow of charged atoms like sodium and calcium that make your brain function and process properly. So, a neuromorphic chip

Neuromorphic chip and ion channels

copies that model by relying on a densely connected web of transistors that mimic the activity of ion channels. Each chip has a network of cores, with inputs and outputs that are wired to additional cores, which all operate in conjunction with each other. Because of this connectivity, neuromorphic chips are able to integrate memory, computation, and communication all together. These chips are an entirely new computational design. Standard chips today are built on the von Neumann architecture, where the processor and memory are separate and the data moves between them. A central processing unit runs commands that are fetched from memory to execute tasks. This is what’s made computers very good at computing, but not as efficient as they could be. Neuromorphic chips, however, completely change that model by having both storage and processing connected within these “neurons” that are all communicating and learning together. The hope is that these neuromorphic chips could transform computers from general-purpose calculators into machines that can learn from experience and make decisions. We’d leap to a future where computers wouldn’t just be able to crunch data at breakneck speeds but could do that AND process sensory data in real time. Some future applications of neuromorphic chips might include combat robots that could decide how to act in the field, drones that could detect changes in the environment, and your car taking you to a drive-through for ice cream after being dumped… basically, these chips could power our future robot overlords. We don’t have machines with sophisticated, brain-like chips yet, but they’re on the horizon. So get ready for a whole new meaning for the term “brain power.”
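The exponential growth described earlier is easy to make concrete. This sketch assumes, for illustration only, the Intel 4004 as a baseline (roughly 2,300 transistors in 1971) and a clean doubling every two years; real chips never tracked the curve this exactly.

```python
def transistors(year, base_count=2300, base_year=1971, doubling_years=2):
    """Moore's-Law-style projection: the transistor count doubles every
    `doubling_years`, starting from `base_count` in `base_year`."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

# Forty years of doubling takes a few thousand transistors into the billions:
for y in (1971, 1991, 2011):
    print(y, int(transistors(y)))
```

Twenty years is ten doublings, a factor of 1,024; forty years is a factor of about a million, which is how a single chip ends up with billions of transistors.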

What is global network (Internet)? (Thu, 02 Nov 2017)

What is the global network (Internet)? Most people don’t have any idea where the internet came from, and it doesn’t matter; they don’t need to. It’s sort of like asking who invented the ballpoint pen, or the
Inventors of internet Vint Cerf and Bob Kahn

flush toilet, or the zipper. These are all things we just use every day; we don’t even think about the fact that one day somebody invented them. The internet is just like that. Many, many years ago, in the early 1970s, Bob Kahn and Vint Cerf began working on the design of what we now call the global network (Internet). It grew out of an earlier experiment called the ARPANET, which stood for Advanced Research Projects Agency Network, a Defense Department research project. Paul Baran had been trying to figure out how to build a communication system (a global network) that might actually survive a nuclear attack. So he had this idea of breaking messages up into blocks and sending them as fast as possible in every possible direction through a mesh network. That work led to what eventually became a nationwide experimental packet network, and it worked. Is anybody in charge of the internet? The honest answer is, well, nobody, and maybe another answer is everybody.
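Baran’s idea of breaking messages into blocks can be sketched in a few lines. This is only a toy illustration of packet switching (the helper names and four-character block size are mine, not anything from the actual protocols): each block carries a sequence number, so no matter which routes the blocks take or in what order they arrive, the receiver can put the message back together.

```python
import random

def to_packets(message, block_size=4):
    """Break a message into numbered blocks, Baran-style."""
    return [(i, message[i:i + block_size])
            for i in range(0, len(message), block_size)]

def reassemble(packets):
    """Packets may arrive in any order; sequence numbers restore it."""
    return "".join(chunk for _, chunk in sorted(packets))

packets = to_packets("SURVIVE THE ATTACK")
random.shuffle(packets)      # simulate blocks taking different routes
print(reassemble(packets))   # SURVIVE THE ATTACK
```

The resilience comes from the fact that no single path matters: lose one route and the blocks simply travel another way through the mesh.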

internet

The real answer is that the global network (internet) is made up of an incredibly large number of independently operated networks. What’s interesting about the system is that it’s fully distributed. There’s no central control deciding how packets are routed, where pieces of the network are built, or even who interconnects with whom. These are all business decisions made independently by the operators. They are all motivated to assure that there is end-to-end connectivity of every part of the network, because the utility of the net is that any device can communicate with any other device, just like you want to be able to make phone calls to any other telephone in the world. Nothing like this has ever been built before. The idea that what you know might be useful to somebody else, or vice versa, is a very powerful motivator for sharing information. By the way, that’s how science gets done: people share information. So this is an opportunity for people to think up new applications, maybe program them as apps on a mobile phone, maybe become part of the continued growth of the network’s infrastructure to bring it to people who don’t have access to it yet, or just make use of it on a day-to-day basis. You can’t escape contact with the internet, so why not get to know it and use it?

First programming language (Thu, 26 Oct 2017)

Today, I would like to answer the question of “What programming language should I learn first?” Now, there are a ton of programming languages for you to choose from. You have Python, you have C, you have C++, you have SQL, you have C#, you have Java, Swift. And the list goes on forever. However, my point is not to overwhelm you with the number of languages that exist; it’s important for you to understand, first of all, why the languages might exist and, second of all, which language you should start off with first. So if today a human being walked up to me who did not know the concept of language, let’s say he knew no languages whatsoever, what first language should I recommend to him? And I’d be like, hmm, definitely not Japanese. Right? That’s one language I probably wouldn’t want to start him off with (as a first language), simply because it’s a little more complicated than, say, Spanish or English, which sit at level one on the language difficulty spectrum.
simple and powerful

But at the same time, if you think about human languages like English and Spanish, just because they’re simpler doesn’t mean they’re not powerful. They’re also spoken pretty much worldwide and a lot of people understand them. So, even though they’re simple, they’re still pretty powerful. Now, let’s transition that over to computer programming. What first programming language should I start with? To answer that question, I would say start off with the simplest language. Definitely do not get started with a really complicated language, because here’s the fact. If you do not know the concept of language at all, and I’m not saying you’re a baby whose neuroplasticity is so high that you just pick up anything, let’s say you’re an adult and you just don’t know the concept of languages, you definitely want to start off with the easiest language. Because once you learn the concept of language as a human being, a way to express your emotions and ideas to other people, once you learn that at its very core, you can transfer over to another language, like, say, Japanese, and go: how do you say banana in Japanese, how do you say this in Japanese? At least you can look it up. You have the tools to actually look that up and learn something in reference to another thing. So, in terms of computer programming, if you know the simplest language, then when you go over to another, more complicated language, you can just go: how do I do this thing that I do in the language I know, in the language I’m trying to learn? Okay? So, your research process becomes a lot simpler. And the ideas are all the same, you guys. For example, the emotions that American people or Italian people or Spanish people express, people in Japan or China express the same emotions, right?
The tool that they use, the language syntax, the specifics of each language might differ, but at its very core, we’re still trying to express sadness, happiness, cherishing, hopefulness, whatever, right? With computer programming languages, you are trying to express ideas to a computer so it can translate them for you. It’s a really smart friend who’s right there listening to you. You just need to speak to him.

Now I wanna talk about which languages you maybe should not start off with. I would recommend that you not start off with C or C++ or C# as your first programming language, pretty much any of the C languages. The reason is that their syntactical difficulty level is much, much higher. You have to worry about every little, tiny detail, or what you’re saying to the computer will make no sense whatsoever. So, not only are you bogged down by the conceptual difficulty when you’re first learning computer programming, you’re also bogged down by the syntactical difficulty, which is literally how you’re typing something. It needs a curly brace, it needs this, it needs that, embellishments. It’s almost as if, when you were first learning to write English, leaving out a comma or a period meant you just wouldn’t be understood at all by another person. That would be very frustrating if English were your first language, because you wouldn’t know whether the problem lies in the ideas you’re expressing, the way you’re expressing them, or some kind of grammatical error. So, start with something that lets you focus on the conceptual stuff first, okay?

“hello world” on different programming languages

So, start with something that has a lower syntactical difficulty, and understand the concepts. In computer programming, you have the concept of looping over things. If there’s one task that you want to do over and over again, that’s called a “loop”. Well, if you learn what a loop is in one language, you can very easily go to another language and apply the same loop. Same with an “if” condition: if this, then that, right? If Johnny worked over 40 hours, Johnny gets paid overtime. That idea, if you can learn to express it in one language, you can very easily express it in another. So, my recommendation to you guys is to start with Python as your first programming language! There’s a reason Python is one of the fastest-growing languages in the world, and if you look at most of the top universities, they’re actually making it the introductory language. The reason they’re doing that is how elegant and simple it is. Now, don’t confuse its simplicity with a lack of power. Remember, Google, Dropbox, Pinterest, Reddit, Instagram, and I could go on, were all built off of Python. Even BitTorrent. YouTube was built off of Python. Sergey Brin, I think that’s who actually said it, one of the founders of Google, said “Python where we can, C where we must,” meaning: use the C programming language where you have to and the Python programming language where you can. Okay? Now, in terms of languages and where they work: say you learn Python and start off with that language. You can always branch out and go to other languages; once you understand the conceptual stuff, you can translate it over to any other language. And Python is a powerful language that allows you to do pretty much anything you want.
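The loop and the overtime condition described here can be sketched in Python; the names and numbers are made up for illustration, not taken from any real payroll system:

```python
# A loop: one task you want to do over and over again.
for day in ["Mon", "Tue", "Wed", "Thu", "Fri"]:
    print("Clock in on", day)

# An "if" condition: if Johnny worked over 40 hours, Johnny gets overtime.
hours_worked = 45
hourly_rate = 20.0

if hours_worked > 40:
    overtime = hours_worked - 40
    pay = 40 * hourly_rate + overtime * hourly_rate * 1.5  # time and a half
else:
    pay = hours_worked * hourly_rate

print(pay)  # 950.0
```

The same loop and if/else ideas carry over to JavaScript, Java, or C almost word for word; only the punctuation around them changes.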
It’s also important to understand why you might pick one language over another. Once you’ve learned your first programming language to a certain level of proficiency and you wanna transition to another language, why would you choose one and work with it rather than another? In reality, you’ll often be working with a lot of languages at the same time, okay?

Now, the reason you’d pick one language over another is its benefits. For example, if you wanna do a lot of stuff related to web development, like front end, where nice graphics show and you click on something and things respond really fast, that is JavaScript. That’s the beauty and magic of JavaScript: you click on things, drop-down menus appear, and all this really fancy stuff that you see on the web happens. That’s done by JavaScript. So, for that, you want to learn JavaScript. Should you start off with JavaScript? Maybe. I would still say start off with Python, then transition over to JavaScript. Just to give you an example, I worked on a project that took about 30 hours. It’s called Agile Directions. I might even link it below, so you guys can click it and check it out. I did not know a single letter of JavaScript, but since I knew Python so well, I would just go online and look up how to do the things I would do in Python in JavaScript, and I was able to bust out that entire giant project in two days. So, yeah. Now, for web languages, you have JavaScript. For data science, a lot of data-analytical and scientific stuff, and even things like building responsive websites, Python can help you. Python brings you a tremendous amount of power and makes it very easy to do really complicated things. Why you might wanna learn C or C++ comes down to the speed you need. C is really known for its efficiency. So, you might want to learn those languages if you’re working in, say, algorithmic trading, where you have to make each trade within a nanosecond. You’re making thousands of trades in nanoseconds, right? You need something that’s really freaking fast. So, C might actually be a good idea, or C++. Linus Torvalds hates C++ and loves C, which is why I keep saying C first.
Java, probably for some of the same reasons. It’s a very popular, powerful language for video game design, and for Android applications Java is very powerful too. Okay guys, hopefully that gives you a bit of an overview of the different languages and why you might pick one of them as your first programming language. I would say start off with a very simple language; Python would be my go-to choice for you.

Transistors, How Do They Work?
https://www.binarymove.com/transistors-how-do-they-work/
Fri, 06 Oct 2017

In a mobile phone there are nearly 100 million transistors; in a computer, over a billion. The transistor is in virtually every electronic device we use: TVs, radios, Tamagotchis. But how does it work? Well, the basic principle is actually incredibly simple. It works just like a switch: it controls the flow of electric current. It can be off, which you could call the zero state, or it can be on, the one state.

switch-transistor
the atom of silicon

And this is how all of our information is now stored and processed, in zeros and ones, little bits of electric current. But unlike the switch, a transistor doesn’t have any moving parts. And it also doesn’t require a human controller. Furthermore, it can be switched on and off much more quickly than a human can flick this switch. And finally, and most importantly it is incredibly tiny. Well, this is all thanks to the miracle of semiconductors or rather the science of semiconductors. Pure silicon is a semiconductor, which means it conducts electric current better than insulators but not as well as metals. This is because an atom of silicon has four electrons in its outermost or valence shell. This allows it to form bonds with its four nearest neighbors. So it forms a tetrahedral crystal.

tetrahedral-crystal

But since all these electrons are stuck in bonds, few ever get enough energy to escape their bonds and travel through the lattice. So having a small number of mobile charges is what makes silicon a semiconductor. Now, this wouldn’t be all that useful without a semiconductor’s secret weapon – DOPING. You’ve probably heard of doping, it’s when you inject a foreign substance in order to improve performance. There are two types of doping called n-type and p-type.

n-type and p-type – doping

To make an n-type semiconductor, you take pure silicon and inject a small amount of an element with five valence electrons, like phosphorus. This is useful because phosphorus is similar enough to silicon that it can fit into the lattice, but it brings with it an extra electron. So the semiconductor now has more mobile charges, and it conducts current better. In p-type doping, an element with only three valence electrons is added to the lattice.

electrons moving

Now, this creates a ‘hole’: a place where there should be an electron, but there isn’t. This still increases the conductivity of the silicon, because electrons can move into it. Although it is electrons that are moving, we like to talk about the holes moving around, because there are far fewer of them. Since a hole is the lack of an electron, it acts as a positive charge, and this is why p-type semiconductor is called p-type. The p stands for positive: it’s positive charges, these holes, which move and conduct the current. Now, it’s a common misconception that n-type semiconductors are negatively charged and p-type semiconductors are positively charged. That’s not true; they are both neutral, because they have the same number of electrons and protons inside them. The n and the p just refer to the sign of the charge that can move within them. So in n-type it’s negative electrons that move, and in p-type it’s positive holes, but both materials are neutral. A transistor is made with both n-type and p-type semiconductors.

transistor common configuration

A common configuration has n on the ends with p in the middle. Just like a switch, a transistor has an electrical contact at each end, and these are called the source and the drain. But instead of a mechanical switch, there is a third electrical contact called the gate, which is insulated from the semiconductor by an oxide layer. When a transistor is made, the n and p-types don’t keep to themselves: electrons diffuse from the n-type, where there are more of them, into the p-type to fill the holes. This creates something called the depletion layer. What’s been depleted?

depletion layer

Charges that can move. There are no more free electrons in the n-type. Why? Because they’ve filled the holes in the p-type. Now this makes the p-type negative thanks to the added electrons. And this is important because the p-type will now repel any electrons that try to come across from the n-type. So the depletion layer actually acts as a barrier, preventing the flow of electric current through the transistor. So right now the transistor is off, it’s like an open switch, it’s in the zero state.

electrons moving

To turn it on, you apply a small positive voltage to the gate. This attracts electrons over and overcomes the repulsion from the depletion layer: it actually shrinks the depletion layer so that electrons can move through and form a conducting channel. The transistor is now on; it’s in the one state. This is remarkable, because just by exploiting the properties of a crystal we’ve been able to create a switch that has no moving parts, that can be turned on and off very quickly with nothing but a voltage, and, most importantly, that can be made tiny. Transistors today are only about 22 nm wide, which means they are only about 50 atoms across. But to keep up with Moore’s law, they’re going to have to keep getting smaller. Moore’s law states that every two years the number of transistors on a chip should double. And there is a limit: as those terminals get closer and closer together, quantum effects become more significant, and electrons can actually tunnel from one side to the other. You may not be able to make a barrier high enough to stop them from flowing. This will be a real problem for the future of transistors, but we’ll probably only face it another ten years down the track.
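Moore’s law, as stated here, is easy to sketch in Python. The starting count of a billion transistors is illustrative (it matches the rough figure for a computer chip mentioned at the top of this article), not measured data:

```python
def transistor_count(start, years, doubling_period=2):
    """Moore's law: the transistor count doubles every `doubling_period` years."""
    return start * 2 ** (years // doubling_period)

# From roughly a billion transistors today, ten years out
# is five doublings, i.e. a factor of 32:
print(transistor_count(1_000_000_000, 10))  # 32000000000
```

Running the same doubling in reverse on the 22 nm feature size is what brings the quantum-tunnelling limit described above within about a decade.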

Quantum Computer, How Does It Work?
https://www.binarymove.com/quantum-computer-how-it-works/
Fri, 01 Sep 2017

A classical computer performs operations using classical bits, which can be either zero or one. In contrast, a quantum computer uses quantum bits, or qubits, and they can be both zero and one at the same time. It is this that gives a quantum computer its superior computing power. There are a number of physical objects that can be used as a qubit: a single photon, a nucleus, or an electron. Some researchers use the outermost electron of a phosphorus atom as a qubit. But how does that work? Well, all electrons have magnetic fields, so they are basically like tiny bar magnets, and this property is called spin. If you place them in a magnetic field, they will align with that field, just like a compass needle lines up with the magnetic field of the Earth.

Now, this is the lowest energy state, so you could call it the zero state, or, for the electron, spin down. You can put it in the one state, spin up, but that takes some energy. If you took the glass off your compass you could turn the needle the other way, but you would have to apply some force to it; you have to push it to flip it to the other side. And that is the higher energy state. In principle, if you were delicate enough to point it exactly against the magnetic field, it would stay there. So far, this is basically just like a classical bit.

Spin UP Spin DOWN electron in quantum superposition

It has got two states, spin up and spin down, which are like the classical one and zero. But the funny thing about quantum objects is that they can be in both states at once. When you measure the spin it will be either up or down.

But before you measure it, the electron can exist in what is called a quantum superposition, described by coefficients that indicate the relative probability of finding the electron in one state or the other. It is hard to imagine how this enables the incredible computing power of quantum computers without considering two interacting quantum bits. There are four possible states of these two electrons.

electron states
qubit-to-bit table

You could think that, well, that is just like two bits of a classical computer: with two bits you can write 00, 01, 10, 11. That is four numbers, but these are still just two bits of information, right? To determine which one of the four numbers you have, your computer code only needs the value of the first bit and the value of the second bit.

Quantum mechanics allows you to make a superposition of each one of these four states. So, to determine the state of this two-spin system, I need to give you four numbers, four coefficients, whereas in the classical example of the two bits, I only needed to give you two bits.

So this is how you understand why two qubits actually contain four bits of information: I need to give you four numbers to tell you the state of this system, whereas here I only need two. If we take three spins, we would have eight different states, and I would need to give you eight different numbers to define the state of those three spins, whereas classically it is just three bits. If you keep going, you find that the amount of equivalent classical information contained in N qubits is two to the power N classical bits.
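This two-to-the-power-N counting can be checked with a small NumPy sketch. The helper name `joint_state` is made up for illustration; the arrays of complex numbers play the role of the coefficients described above:

```python
import numpy as np

zero = np.array([1, 0], dtype=complex)  # spin down, the "0" state
one = np.array([0, 1], dtype=complex)   # spin up, the "1" state
plus = (zero + one) / np.sqrt(2)        # equal superposition of 0 and 1

def joint_state(*qubits):
    # The joint state of several qubits is the Kronecker (tensor) product
    # of the individual states, so n qubits need 2**n complex coefficients.
    state = np.array([1], dtype=complex)
    for q in qubits:
        state = np.kron(state, q)
    return state

three = joint_state(plus, plus, plus)
print(len(three))  # 8 coefficients for 3 spins: 2**3
```

Each coefficient squared is a probability, and across all two-to-the-N basis states those probabilities sum to one.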

And, of course, the power of exponentials tells you that once you have, say, 300 of those qubits in what we call a fully entangled state, where you can create these really crazy states with a superposition of all 300 spins being one way and another way and another way and so on, then you have something like two to the 300 classical bits, which is as many particles as there are in the universe. But there is a catch: although the qubits can exist in any combination of states, when they are measured they must fall into one of the basis states, and all the other information about the state before the measurement is lost. So you generally don’t want the final result of your quantum computation to be a very complicated superposition state, because you cannot measure a superposition; you can only measure one of the basis states, like down, down, up, up. Yeah. So what you want is to design the logical operations that you need to get to the final computational result in such a way that the final result is something you are able to measure: just a unique state.
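The catch about measurement can be mimicked in the same hedged way: sample one basis state with probability given by the squared magnitude of its coefficient, and discard everything else. The state and labels below are illustrative:

```python
import numpy as np

rng = np.random.default_rng()

# Four coefficients for a two-spin system, here an equal superposition.
state = np.array([0.5, 0.5, 0.5, 0.5], dtype=complex)
probs = np.abs(state) ** 2  # probability of each basis state

# Measuring forces the system into one basis state at random;
# the original coefficients are lost.
outcome = rng.choice(["down-down", "down-up", "up-down", "up-up"], p=probs)
print(outcome)
```

This is why a useful quantum algorithm steers the computation so that the interesting answer ends up concentrated in a single measurable basis state.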

I guess this is, to some degree, the reason why quantum computers are not a replacement for classical computers. They are not universally faster; they are only faster for special types of calculations, where you can use the fact that you have all these quantum superpositions available to you at the same time to do some kind of computational parallelism. If you just want to watch a video in high definition, browse the internet, or write some documents for work, they are not going to give you any particular improvement, because you need a classical algorithm to get the result. So you should not think of a quantum computer as something where every operation is faster. In fact, every operation will probably be slower than in the computer you have at your desk. But it is a computer where the number of operations required to arrive at the result can be exponentially smaller. The improvement is not in the speed of the individual operation; it is in the total number of operations you need to arrive at the result. And that is only the case for particular types of calculations, particular algorithms. It is not universal, which is why a quantum computer is not a replacement for a classical computer.