
This article is part of the series What you won’t learn in the basics courses and is aimed at people who have an understanding of programming but want to gain a deeper insight into how things work and why they work that way.

Somewhere in the first lectures of a programming basics course, we are shown how to read input from and write output to the terminal. That’s called standard input/output, or just Standard IO for short.

So, in C# we have Console.WriteLine and Console.ReadLine.
In C++, we have cin and cout.

All these things are associated with the topic of Standard IO. And what we are told is that the standard input is the keyboard and the standard output is the screen. And for the most part, that is the case.

But what we don’t get told is that Standard IO can be changed: there is a way to accept input from a file and redirect output to another file. No, I’m not talking about writing code to read/write files. I am talking about using Standard IO for the job, via the terminal.
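To make this concrete, here is a minimal sketch in Python (the file name and the upper-casing logic are invented purely for illustration):

```python
# A sketch (hypothetical file name: echo_upper.py) of a program that talks
# only to Standard IO, so the terminal can reroute its data to and from
# files with no file-handling code in the program itself:
#
#   python echo_upper.py                        # keyboard in, screen out
#   python echo_upper.py < input.txt            # read from a file instead
#   python echo_upper.py < input.txt > out.txt  # and write to a file too
import sys

def transform(text: str) -> str:
    # The program's actual logic; here we simply upper-case the text.
    return text.upper()

if __name__ == "__main__":
    # sys.stdin and sys.stdout are the standard streams. The program never
    # knows (or cares) whether they point at the terminal or at files.
    sys.stdout.write(transform(sys.stdin.read()))
```

The point of the design is that the program itself stays ignorant of where its data comes from; the shell decides that at launch time.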


Last time, we delved into bitwise operations. This time, we will look at a higher-level computer science concept – algorithms.

When we first get introduced to algorithms, we normally start by learning sorting algorithms. Compared to other algorithms, they are easier to grasp, and if we pay attention in class, we will do a good job of understanding them. However, what we don’t learn in these classes is when they can actually be useful.
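As a small teaser of one everyday answer, sketched in Python (the helper below is made up for illustration): after sorting, equal items end up next to each other, so a duplicate check becomes a single pass instead of comparing every pair.

```python
# Sorting as a tool: once the list is ordered, any duplicates are adjacent,
# so one linear scan finds them.
def has_duplicates(items):
    ordered = sorted(items)      # the O(n log n) sort does the heavy lifting
    for prev, cur in zip(ordered, ordered[1:]):
        if prev == cur:          # duplicates are now next to each other
            return True
    return False

print(has_duplicates([3, 1, 4, 1, 5]))  # True: 1 appears twice
print(has_duplicates([2, 7, 1, 8]))     # False
```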


Last time, we talked about character sets and encoding. Now we will return to dealing with binary numbers. However, this time we won’t examine how binary numbers work and what their nature is – we have covered that in previous articles. Today, we will see how to apply that knowledge in practice by examining how bitwise operations work.

This topic is usually neglected in a traditional computer science curriculum (at least in some universities I know of). But I think that this knowledge can be useful for two reasons:

Gaining a valuable tool which can come in handy when pursuing a specialization as a low-level programmer (an embedded developer, for example).

We will start by examining what tools we have at our disposal – the operations which modern programming languages provide us with. Then we will move on to applying that knowledge to actually manipulate numbers in a binary fashion, and finally, we will see some real-world examples of how bitwise operations are used to build highly efficient systems.
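As a taste of that first step, here is a sketch of the bitwise operators most modern languages provide. Python is used here, but C, C++ and C# spell them with the same symbols.

```python
# The core bitwise operators, applied to two small example numbers.
a = 0b1100  # 12
b = 0b1010  # 10

and_result = a & b   # 0b1000 -> 8,  bits set in both numbers
or_result  = a | b   # 0b1110 -> 14, bits set in either number
xor_result = a ^ b   # 0b0110 -> 6,  bits set in exactly one of them
inverted   = ~a      # -13, flips every bit (two's complement)
shl        = a << 2  # 0b110000 -> 48, shift left = multiply by 2**2
shr        = a >> 2  # 0b11 -> 3,      shift right = divide by 2**2

print(and_result, or_result, xor_result, inverted, shl, shr)  # 8 14 6 -13 48 3
```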


My last article was about different data types and some tricks with them. We also talked a little about characters. However, working with them can be a bit strange due to a fancy term in computing called encoding.

Today, my friend asked me to fix the subtitles for his movies. He told me that strange symbols kept appearing all the time, so he tried reinstalling Windows and changing all sorts of options, but nothing seemed to work. He clearly had no idea what an encoding is. That is understandable, since he doesn’t have a CS background. But there seem to be a lot of developers out there (me included, in my early days) who don’t know what encoding means. Sure, they might have heard of UTF-8, but what is it? We have ASCII, right?

Well, I am going to address the issue of encoding in this article, as I think it is fundamental for anyone getting their hands dirty with programming and computing. It seems that not many programming basics courses cover this topic in much detail.
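As a preview of the subtitle problem above, here is a sketch in Python (the Cyrillic sample text is made up purely for illustration): the same bytes, decoded with the wrong encoding, turn into strange symbols.

```python
# The same bytes, read with the wrong encoding, become gibberish.
text = "Привет"                  # non-ASCII text (Russian for "hi")
data = text.encode("utf-8")      # text -> bytes; each letter takes 2 bytes here

# Decoding with the encoding the bytes were written in restores the text:
assert data.decode("utf-8") == text

# Decoding the very same bytes as Latin-1 "succeeds" too, but produces the
# kind of garbage a media player shows when it guesses the encoding wrong:
garbled = data.decode("latin-1")
print(garbled)                   # strange symbols, starting with "Ð"

# And plain ASCII cannot represent this text at all:
try:
    text.encode("ascii")
except UnicodeEncodeError:
    print("ASCII has no codes for these characters")
```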


In the past few weeks, we have discussed the different ways computers deal with binary numbers in order to represent the numbers we are used to seeing – positive, negative and real. This time, we will take a step back from the details of how the hardware deals with such issues and focus on how the design decisions taken by computer architects affect the way we represent data in our code. In particular, we shall explore the different “features” that the data types we use in our code hide from us.
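As a quick illustration of one such hidden “feature” (a Python sketch; float and double in C# and C++ behave the same way): real numbers are stored in binary floating point, so many decimal values can only be approximated.

```python
# Binary floating point cannot represent 0.1 or 0.2 exactly, so their sum
# is not exactly 0.3.
import math

result = 0.1 + 0.2
print(result)           # 0.30000000000000004
print(result == 0.3)    # False

# The usual fix is to compare with a tolerance instead of exact equality:
print(math.isclose(result, 0.3))  # True
```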