///////////////////////////////////////////////////////////////////////////////////////////////
// Polymorphic classes are classes that declare or inherit a virtual function.
//-----------------------------------------------------------------------------
// Whenever you use dynamic_cast, enable run-time type information (RTTI).
//
// To find this option in the development environment,
// click Settings on the Project menu. Then click the C/C++ tab,
// and click C++ Language in the Category box.
//-----------------------------------------------------------------------------
// Run-time type information (RTTI) is a mechanism that allows the type of an object
// to be determined during program execution. RTTI was added to the C++ language because
// many vendors of class libraries were implementing this functionality themselves.
// This caused incompatibilities between libraries. Thus, it became obvious that support
// for run-time type information was needed at the language level.
//
// For the sake of clarity, this discussion of RTTI is almost completely restricted to pointers.
// However, the concepts discussed also apply to references.
//
// There are three main C++ language elements to run-time type information:
//
// * The dynamic_cast operator.
//   Used for conversion of polymorphic types. See dynamic_cast Operator for more information.
//
// * The typeid operator.
//   Used for identifying the exact type of an object.
//
// * The type_info class.
//   Used to hold the type information returned by the typeid operator.
///////////////////////////////////////////////////////////////////////////////////////////////
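To make the three elements concrete, here is a minimal sketch (the class names Base and Derived are illustrative, not taken from any particular library) showing dynamic_cast, typeid, and type_info at work on a polymorphic class:

```cpp
#include <cassert>
#include <typeinfo>

// Base is polymorphic because it declares a virtual function;
// that is what enables RTTI on objects of this hierarchy.
struct Base {
    virtual ~Base() {}
};
struct Derived : Base {};

// dynamic_cast returns a valid Derived* only if the object really is a Derived,
// and a null pointer otherwise.
inline bool isDerived(Base* b) {
    return dynamic_cast<Derived*>(b) != 0;
}

// typeid identifies the exact dynamic type of the referenced object;
// the comparison operates on the type_info objects it returns.
inline bool sameDynamicType(Base& a, Base& b) {
    return typeid(a) == typeid(b);
}
```

Note that typeid applied to a dereferenced Base pointer reports the run-time type of the pointed-to object, not the static type of the pointer, precisely because Base is polymorphic.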

The Fourier transform is a way to decompose a signal into its constituent frequencies. It comes in three varieties: the plain old Fourier transform, the Fourier series, and the discrete Fourier transform (DFT).

To get an idea of what the DFT does, consider an MP3 player plugged into a loudspeaker. The MP3 player sends the speaker audio information as fluctuations in the voltage of an electrical signal. Those fluctuations cause the speaker drum to vibrate, which in turn causes air particles to move, producing sound.

An audio signal’s fluctuations over time can be depicted as a graph: the x-axis is time, and the y-axis is the voltage of the electrical signal, or perhaps the movement of the speaker drum or air particles. Either way, the signal ends up looking like an erratic, wavelike squiggle. But when you listen to the sound produced from that squiggle, you can clearly distinguish all the instruments of a symphony orchestra playing distinct notes at the same time. That’s because the erratic squiggle is, effectively, the sum of a number of much more regular squiggles, which represent different frequencies of sound.

“Frequency” just means the rate at which air molecules go back and forth, or a voltage fluctuates, and it can be represented as the rate at which a regular squiggle goes up and down. When you add two frequencies together, the resulting squiggle goes up where both the component frequencies go up, goes down where they both go down, and does something in between where they’re going in different directions.

The DFT does mathematically what the human ear does physically: decomposes a signal into its component frequencies. Unlike the analog signal from, say, a record player, the digital signal from an MP3 player is just a series of numbers, representing very short samples of a real-world sound: CD-quality digital audio recording, for instance, collects 44,100 samples a second. If you extract some number of consecutive values from a digital signal — 8, or 128, or 1,000 — the DFT represents them as the weighted sum of an equivalent number of frequencies. (“Weighted” just means that some of the frequencies count more than others toward the total.)

The application of the DFT to wireless technologies is fairly straightforward: the ability to break a signal into its constituent frequencies lets cell-phone towers, for instance, disentangle transmissions from different users, allowing more of them to share the air.

The DFT is also used to generate and filter cell-phone and Wi-Fi transmissions, to compress audio, image, and video files so that they take up less bandwidth, and to solve differential equations, among other things.

The application to data compression is less intuitive. But if you extract an eight-by-eight block of pixels from an image, each row or column is simply a sequence of eight numbers — like a digital signal with eight samples. The whole block can thus be represented as the weighted sum of 64 frequencies. If there’s little variation in color across the block, the weights of most of those frequencies will be zero or near zero. Throwing out the frequencies with low weights allows the block to be represented with fewer bits but little loss of fidelity.

The Discrete Fourier Transform (DFT) is a method that can be applied to a finite collection of real-world data points. The DFT (time domain to frequency domain) is defined as:

X(k) = Σ_{n=0}^{N−1} x(n) e^(−j2πkn/N),  for k = 0, 1, …, N−1

while the Inverse Discrete Fourier Transform (IDFT) (frequency domain to time domain) is defined as:

x(n) = (1/N) Σ_{k=0}^{N−1} X(k) e^(j2πkn/N),  for n = 0, 1, …, N−1

Where:

x(n) is an array of complex time-domain data.

n is an index of time steps.

X(k) is an array of complex frequency-domain data.

k is an index of frequency spectral lines.

N represents the size of the data arrays.

It may not be clear to the nontechnical reader, but the DFT and IDFT expressions above require nested loops, one over n and one over k. As a result, the solution time grows as O(N²), so the direct method is not practical for most real-world data sets; in practice, the fast Fourier transform (FFT), which produces the same result in O(N log N) time, is used instead. The value of the DFT lies not in processing data, but in clarifying technical issues.
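To make those nested loops concrete, here is a deliberately naive O(N²) sketch of the DFT definition above in C++; the names x, X, n, k, and N match the definitions given earlier:

```cpp
#include <cassert>
#include <cmath>
#include <complex>
#include <vector>

// Naive DFT: X(k) = sum over n of x(n) * e^(-j*2*pi*k*n/N).
// One loop over k and one nested loop over n -- hence O(N^2) time.
std::vector<std::complex<double>> dft(const std::vector<std::complex<double>>& x) {
    const std::size_t N = x.size();
    const double pi = 3.14159265358979323846;
    std::vector<std::complex<double>> X(N);
    for (std::size_t k = 0; k < N; ++k) {        // one spectral line per iteration
        std::complex<double> sum(0.0, 0.0);
        for (std::size_t n = 0; n < N; ++n) {    // inner loop over time steps
            double angle = -2.0 * pi * double(k) * double(n) / double(N);
            sum += x[n] * std::complex<double>(std::cos(angle), std::sin(angle));
        }
        X[k] = sum;
    }
    return X;
}
```

A constant signal, for example, has all of its weight at frequency zero: transforming four samples of value 1 yields X(0) = 4 and (up to rounding) zero everywhere else.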

The good news is that these “memory pools” are useful in a number of situations. The bad news is that I’ll have to drag you through the mire of how they work before we discuss all the uses. But if you don’t know about memory pools, it might be worthwhile to slog through this article; you might learn something useful!

First of all, recall that a memory allocator is simply supposed to return uninitialized bits of memory; it is not supposed to produce “objects.” In particular, the memory allocator is not supposed to set the virtual-pointer or any other part of the object, as that is the job of the constructor, which runs after the memory allocator. Starting with a simple memory allocator function, allocate(), you would use placement new to construct an object in that memory. In other words, the following is morally equivalent to new Foo():

void* raw = allocate(sizeof(Foo));  // line 1
Foo* p = new(raw) Foo();            // line 2

Okay, assuming you’ve used placement new and have survived the above two lines of code, the next step is to turn your memory allocator into an object. This kind of object is called a “memory pool” or a “memory arena.” This lets your users have more than one “pool” or “arena” from which memory will be allocated. Each of these memory pool objects will allocate a big chunk of memory using some specific system call (e.g., shared memory, persistent memory, stack memory, etc.; see below), and will dole it out in little chunks as needed. Your memory-pool class might look something like this:

class Pool {
public:
  void* alloc(size_t nbytes);
  void dealloc(void* p);
private:
  ...  // data members used in your pool object
};

void* Pool::alloc(size_t nbytes)
{
  ...your algorithm goes here...
}

void Pool::dealloc(void* p)
{
  ...your algorithm goes here...
}
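The algorithm itself is up to you. As one hypothetical possibility, a trivial “bump” pool that carves chunks out of one big buffer and releases everything at once in its destructor might look like this (the fixed buffer size and the 8-byte alignment are assumptions made for illustration):

```cpp
#include <cassert>
#include <cstddef>

class Pool {
public:
    explicit Pool(std::size_t bytes) : buf_(new char[bytes]), size_(bytes), used_(0) {}
    ~Pool() { delete[] buf_; }          // everything is released in one shot

    void* alloc(std::size_t nbytes) {
        nbytes = (nbytes + 7) & ~std::size_t(7);  // round up to 8-byte alignment
        if (used_ + nbytes > size_) return 0;     // pool exhausted
        void* p = buf_ + used_;                   // "bump" the high-water mark
        used_ += nbytes;
        return p;
    }
    void dealloc(void*) {}              // individual frees are no-ops in this scheme
private:
    char* buf_;
    std::size_t size_;
    std::size_t used_;
};
```

This particular design trades the ability to reuse individual chunks for extremely cheap allocation, which suits the allocate-like-crazy-then-throw-it-all-away usage pattern described below.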
Now one of your users might have a Pool called pool, from which they could allocate objects like this:
Pool pool;
...
void* raw = pool.alloc(sizeof(Foo));
Foo* p = new(raw) Foo();

Or simply:

Foo* p = new(pool.alloc(sizeof(Foo))) Foo();
The reason it’s good to turn Pool into a class is because it lets users create N different pools of memory rather than having one massive pool shared by all users. That allows users to do lots of funky things. For example, if they have a chunk of the system that allocates memory like crazy then goes away, they could allocate all their memory from a Pool, then not even bother doing any deletes on the little pieces: just deallocate the entire pool at once. Or they could set up a “shared memory” area (where the operating system specifically provides memory that is shared between multiple processes) and have the pool dole out chunks of shared memory rather than process-local memory. Another angle: many systems support a non-standard function often called alloca() which allocates a block of memory from the stack rather than the heap. Naturally this block of memory automatically goes away when the function returns, eliminating the need for explicit deletes. Someone could use alloca() to give the Pool its big chunk of memory, then
all the little pieces allocated from that Pool act like they’re local: they automatically vanish when the function returns. Of course the destructors don’t get called in some of these cases, and if the destructors do something nontrivial you won’t be able to use these techniques, but in cases where the destructor merely deallocates memory, these sorts of techniques can be useful.

Okay, assuming you survived the 6 or 8 lines of code needed to wrap your allocate function as a method of a Pool class, the next step is to change the syntax for allocating objects. The goal is to change from the rather clunky syntax new(pool.alloc(sizeof(Foo))) Foo() to the simpler syntax new(pool) Foo(). To make this happen, you need to add the following function just below the definition of your Pool class:

inline void* operator new(size_t nbytes, Pool& pool)
{
  return pool.alloc(nbytes);
}
Now when the compiler sees new(pool) Foo(), it calls the above operator new and passes
sizeof(Foo) and pool as parameters, and the only function that ends up using the funky
pool.alloc(nbytes) method is your own operator new.
Now to the issue of how to destruct/deallocate the Foo objects. Recall that the brute-force approach sometimes used with placement new is to explicitly call the destructor, then explicitly deallocate the memory:

void sample(Pool& pool)
{
  Foo* p = new(pool) Foo();
  ...
  p->~Foo();        // explicitly call dtor
  pool.dealloc(p);  // explicitly release the memory
}
This has several problems, all of which are fixable:

1. The memory will leak if Foo::Foo() throws an exception.
2. The destruction/deallocation syntax is different from what most programmers are used to, so they’ll probably screw it up.
3. Users must somehow remember which pool goes with which object. Since the code that allocates is often in a different function from the code that deallocates, programmers will have to pass around two pointers (a Foo* and a Pool*), which gets ugly fast (for example, what if they had an array of Foos, each of which potentially came from a different Pool; ugh).
We will fix them in the above order.
Problem #1: plugging the memory leak. When you use the “normal” new operator, e.g., Foo* p = new Foo(), the compiler generates some special code to handle the case when the constructor throws an exception. The actual code generated by the compiler is functionally similar to this:

// This is functionally what happens with Foo* p = new Foo()
Foo* p;

// don’t catch exceptions thrown by the allocator itself
void* raw = operator new(sizeof(Foo));

// catch any exceptions thrown by the ctor
try {
  p = new(raw) Foo();  // call the ctor with raw as this
}
catch (...) {
  // oops, ctor threw an exception
  operator delete(raw);
  throw;  // rethrow the ctor’s exception
}
The point is that the compiler deallocates the memory if the ctor throws an exception. But in the case of the “new with parameter” syntax (commonly called “placement new”), the compiler won’t know what to do if the exception occurs, so by default it does nothing:

// This is functionally what happens with Foo* p = new(pool) Foo():
void* raw = operator new(sizeof(Foo), pool);
// the above function simply returns “pool.alloc(sizeof(Foo))”
Foo* p = new(raw) Foo();
// if the above line “throws”, pool.dealloc(raw) is NOT called
So the goal is to force the compiler to do something similar to what it does with the global new operator. Fortunately it’s simple: when the compiler sees new(pool) Foo(), it looks for a corresponding operator delete. If it finds one, it does the equivalent of wrapping the ctor call in a try block as shown above. So we would simply provide an operator delete with the following signature (be careful to get this right; if the second parameter has a different type from the second parameter of operator new(size_t, Pool&), the compiler doesn’t complain; it simply bypasses the try block when your users say new(pool) Foo()):

void operator delete(void* p, Pool& pool)
{
  pool.dealloc(p);
}
After this, the compiler will automatically wrap the ctor calls of your new expressions in a try block:
// This is functionally what happens with Foo* p = new(pool) Foo()
Foo* p;
// don’t catch exceptions thrown by the allocator itself
void* raw = operator new(sizeof(Foo), pool);
// the above simply returns “pool.alloc(sizeof(Foo))”

// catch any exceptions thrown by the ctor
try {
  p = new(raw) Foo();  // call the ctor with raw as this
}
catch (...) {
  // oops, ctor threw an exception
  operator delete(raw, pool);  // that’s the magical line!!
  throw;  // rethrow the ctor’s exception
}
In other words, the one-liner function operator delete(void* p, Pool& pool) causes the compiler to automagically plug the memory leak. Of course that function can be, but doesn’t have to be, inline.
Problems #2 (“ugly therefore error prone”) and #3 (“users must manually associate pool-pointers with the object that allocated them, which is error prone”) are solved simultaneously with an additional 10-20 lines of code in one place. In other words, we add 10-20 lines of code in one place (your Pool header file) and simplify an arbitrarily large number of other places (every piece of code that uses your Pool class).
The idea is to implicitly associate a Pool* with every allocation. The Pool* associated with the global allocator would be NULL, but at least conceptually you could say every allocation has an associated Pool*. Then you replace the global operator delete so it looks up the associated Pool*, and if non-NULL, calls that Pool’s deallocate function. For example, if(!) the normal deallocator used free(), the replacement for the global operator delete would look something like this:

void operator delete(void* p)
{
  if (p != NULL) {
    Pool* pool = /* somehow get the associated ‘Pool*’ */;
    if (pool == NULL)
      free(p);
    else
      pool->dealloc(p);
  }
}
If you’re not sure whether the normal deallocator was free(), the easiest approach is to also replace the global operator new with something that uses malloc(). The replacement for the global operator new would look something like this (note: this definition ignores a few details such as the new_handler loop and the throw std::bad_alloc() that happens if we run out of memory):

void* operator new(size_t nbytes)
{
  if (nbytes == 0)
    nbytes = 1;  // so all allocations get a distinct address
  return malloc(nbytes);
}
The only remaining problem is to associate a Pool* with an allocation. One approach, used in at least one commercial product, is to use a std::map. In other words, build a look-up table whose keys are the allocation-pointer and whose values are the associated Pool*. For reasons I’ll describe in a moment, it is essential that you insert a key/value pair into the map only in operator new(size_t, Pool&). In particular, you must not insert a key/value pair from the global operator new (e.g., you must not say poolMap[p] = NULL in the global operator new). Reason: doing that would create a nasty chicken-and-egg problem: since std::map probably uses the global operator new, it would end up inserting a new entry every time it inserts a new entry, leading to infinite recursion; bang, you’re dead.

Even though this technique requires a std::map look-up for each deallocation, it seems to have acceptable performance, at least in many cases.
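As a sketch of that look-up-table idea (using hypothetical helper names pooledAlloc/pooledFree rather than actually replacing the global operators, which is harder to demonstrate safely):

```cpp
#include <cassert>
#include <cstdlib>
#include <map>

// Minimal stand-in Pool for illustration only.
struct Pool {
    void* alloc(std::size_t n) { return std::malloc(n); }
    void dealloc(void* p)      { std::free(p); }
};

// Look-up table: allocation pointer -> owning Pool.
// Entries are inserted ONLY for pool allocations, never for plain ones,
// which is what avoids the chicken-and-egg recursion described above.
static std::map<void*, Pool*> poolMap;

void* pooledAlloc(std::size_t nbytes, Pool& pool) {
    void* p = pool.alloc(nbytes);
    if (p) poolMap[p] = &pool;
    return p;
}

void pooledFree(void* p) {
    if (!p) return;
    std::map<void*, Pool*>::iterator it = poolMap.find(p);
    if (it == poolMap.end()) {
        std::free(p);                // not from any pool: use the normal deallocator
    } else {
        it->second->dealloc(p);
        poolMap.erase(it);           // keep the table from growing without bound
    }
}
```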

Another approach that is faster but might use more memory and is a little trickier is to prepend a Pool* just before all allocations. For example, if nbytes was 24, meaning the caller was asking to allocate 24 bytes, we would allocate 28 (or 32 if you think the machine requires 8-byte alignment for things like doubles and/or long longs), stuff the Pool* into the first 4 bytes, and return the pointer 4 (or 8) bytes from the beginning of what you allocated. Then your global operator delete backs off the 4 (or 8) bytes, finds the Pool*, and if NULL, uses free(), otherwise calls pool->dealloc(). The parameter passed to free() and pool->dealloc() would be the pointer 4 (or 8) bytes to the left of the original parameter, p. If(!) you decide on 4-byte alignment, your code would look something like this (although as before, the following operator new code elides the usual out-of-memory handlers):

void* operator new(size_t nbytes)
{
  if (nbytes == 0)
    nbytes = 1;                        // so all allocations get a distinct address
  void* ans = malloc(nbytes + 4);      // overallocate by 4 bytes
  *(Pool**)ans = NULL;                 // use NULL in the global new
  return (char*)ans + 4;               // don’t let users see the Pool*
}

void* operator new(size_t nbytes, Pool& pool)
{
  if (nbytes == 0)
    nbytes = 1;                        // so all allocations get a distinct address
  void* ans = pool.alloc(nbytes + 4);  // overallocate by 4 bytes
  *(Pool**)ans = &pool;                // put the Pool* here
  return (char*)ans + 4;               // don’t let users see the Pool*
}

void operator delete(void* p)
{
  if (p != NULL) {
    p = (char*)p - 4;                  // back off to the Pool*
    Pool* pool = *(Pool**)p;
    if (pool == NULL)
      free(p);                         // note: 4 bytes left of the original p
    else
      pool->dealloc(p);                // note: 4 bytes left of the original p
  }
}
Naturally the last few paragraphs of this article are viable only when you are allowed to change the global operator new and operator delete. If you are not allowed to change these global functions, the first three quarters of this article is still applicable.

Communication between software development teams and management has always been a sticking point in IT. The two groups tend to think about a given problem in fundamentally different ways. A great deal of research addresses how project managers should track and interpret the progress and issues of developers. Even so, breakdowns in communication are common and are a leading cause of project failure. A good architect is the most effective known cure for this problem. An architect’s primary responsibility is to provide a shared medium for communication between developers and project managers. Architects are responsible for fitting business rules and requirements to engineering practices and limitations to ensure success. Following are some of the key traits of a successful architect.

Willingness and ability to communicate: The most valuable principle when identifying an architect among developers is effective communication. You want to look for skilled and experienced developers who have a history of taking the initiative to communicate with business interests in projects. Architects often have to anticipate gaps in understanding before they can contribute. They have to be willing to go out on a limb to ensure a meeting of the technical and business minds. They don’t have to schedule and coordinate exchanges; this is still generally the job of a project manager. Their task is to determine the best tools and artifacts for expressing the design of a system in a way that facilitates an effective exchange. They must be able to sense when current methods are falling short and a new approach is needed. Writing skills are also important, as are drafting skills, or the ability to use diagramming or charting software.

Experience negotiating details: An architect often has to lead discussions of technical compromises for systems development. Conflicting priorities might involve practical limitations, risk avoidance, or possibly differences in requirements among various business groups. A good architect can efficiently assess the technical possibilities and chart a course for development that addresses various interests and limitations without losing the essential value of the project. This ties into communications skills discussed previously, but also taps into the architect’s technical ability. A good architect candidate would be a developer who often helps steer contentious discussions toward new ideas, and doesn’t become entrenched in one position.

Self-starter; motivated to solve design problems: An architect’s day-to-day goals are often unclear. Many developers simply look at a functional specification to carve out a task list. An architect is usually the one providing these developers the structure required to maximize efficiency. A good candidate takes the initiative not only in communicating, but also in anticipating and tackling design issues — usually without any specific directive. A developer who stays busy and engaged in the project, regardless of assigned responsibility, has an opportunity to shine among his peers.

Abstract thinking and analysis: Architects must be able to take a vaguely expressed concept and turn it into a project artifact that can be appreciated by the interested parties. They must be able to grasp abstract concepts and to communicate them in concrete terms. Good candidates among developers are often called upon, or take it upon themselves, to explain confusing issues in the development life cycle. They are quick to assess ideas and turn them into practical suggestions for moving forward.

Developers often are mathematically strong, whereas good architects tend to be stronger verbally. An “engineering mindset,” often ascribed to developers by managers, is an interesting prism through which to assess architects. Architects should have strong technical problem-solving skills, but they must also be able to grasp the bigger picture of how the people involved interact with technology. This requires a form of abstract thinking (beyond the bits and bytes of code) that can be difficult to master.

I tend to avoid elitism about what level of formal education is required to groom a good developer. I have seen amazing developers who are high-school drop-outs. However, when it comes to architecture, my personal experience as well as my appreciation of the required abilities makes me believe strongly that a good architect usually has attained at least a challenging baccalaureate degree.

Camera ISO is one of the three pillars of photography (the other two being aperture and shutter speed), and every photographer should thoroughly understand it to get the most out of their equipment.

What is ISO?

In very basic terms, ISO is the level of sensitivity of your camera to available light. The lower the ISO number, the less sensitive it is to the light, while a higher ISO number increases the sensitivity of your camera. The component within your camera that can change sensitivity is called “image sensor” or simply “sensor”. It is the most important (and most expensive) part of a camera and it is responsible for gathering light and transforming it into an image. With increased sensitivity, your camera sensor can capture images in low-light environments without having to use a flash. But higher sensitivity comes at an expense – it adds grain or “noise” to the pictures.

The difference is clear: the images on the right (ISO 2000 and ISO 6400) have far more noise in them than the one on the left (ISO 200).

Every camera has something called “Base ISO”, which is typically the lowest ISO number of the sensor that can produce the highest image quality, without adding noise to the picture.

So, optimally, you should always try to stick to the base ISO to get the highest image quality. However, it is not always possible to do so, especially when working in low-light conditions.

Typically, ISO numbers start from 100-200 (Base ISO) and increase in geometric progression (powers of two). So the ISO sequence is: 100, 200, 400, 800, 1600, 3200, 6400 and so on. The important thing to understand is that each step between the numbers effectively doubles the sensitivity of the sensor. So ISO 200 is twice as sensitive as ISO 100, while ISO 400 is twice as sensitive as ISO 200. That makes ISO 400 four times as sensitive to light as ISO 100, and ISO 1600 sixteen times as sensitive as ISO 100, and so forth. What does it mean when a sensor is sixteen times more sensitive to light? It means that it needs sixteen times less time to capture an image!

ISO Speed Example:

ISO 100 – 1 second

ISO 200 – 1/2 of a second

ISO 400 – 1/4 of a second

ISO 800 – 1/8 of a second

ISO 1600 – 1/16 of a second

ISO 3200 – 1/32 of a second.

In the above ISO Speed Example, if your camera sensor needed exactly 1 second to capture a scene at ISO 100, simply by switching to ISO 800, you can capture the same scene at 1/8th of a second or at 125 milliseconds! That can mean a world of difference in photography, since it can help to freeze motion.
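The arithmetic above reduces to a one-line rule: at a fixed aperture and scene brightness, the required exposure time scales inversely with ISO. A tiny illustrative helper (the function name is made up for this example):

```cpp
#include <cassert>

// Exposure time needed at a given ISO, given the time needed at a base ISO.
// Doubling the ISO halves the required time; assumes fixed aperture and scene.
double shutterTime(double baseSeconds, double baseISO, double iso) {
    return baseSeconds * baseISO / iso;
}
```

For instance, shutterTime(1.0, 100, 800) gives 0.125 seconds, i.e. 1/8 of a second, or 125 milliseconds.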

Here are seven consequential ways in which a manager or supervisor can quickly create a work environment that fosters increased employee motivation.

Communicate responsibly and effectively any information employees need to perform their jobs most effectively. Employees want to be members of the in-crowd, people who know what is happening at work as soon as other employees know. They want the information necessary to do their jobs. They need enough information so that they make good decisions about their work.

Meet with employees following management staff meetings to update them about any company information that may impact their work. Changing due dates, customer feedback, product improvements, training opportunities, and updates on new departmental reporting or interaction structures are all important to employees. Communicate more than you think is necessary.

Stop by the work area of employees who are particularly affected by a change to communicate more. Make sure the employee is clear about what the change means for their job, goals, time allocation, and decisions.

Communicate daily with every employee who reports to you. Even a pleasant “good morning” enables the employee to engage with you.

Hold a weekly one-on-one meeting with each employee who reports to you. They like to know that they will have this time every week. Encourage employees to come prepared with questions, requests for support, troubleshooting ideas for their work, and information that will keep you from being blindsided or disappointed by a failure to produce on schedule or as committed.
Employees find interaction and communication with, and attention from, senior and executive managers motivational. In a recent study by Towers Perrin (now Towers Watson), the Global Workforce Study, which included nearly 90,000 workers from 18 countries, the role of senior managers in attracting employee discretionary effort exceeded that of immediate supervisors.

Implement an open-door policy for staff members to talk, share ideas, and discuss concerns. Make sure managers understand that problems they can and should solve will be directed back to them; the executive’s job is to listen.

Congratulate staff on life events such as new babies, inquire about vacation trips, and ask about how both personal and company events turned out. Care enough to stay tuned into these kinds of employee life events and activities.