Category Archives: Telecoms


It’s easy to think of capitalism as being an ever-present cultural artifact since most of us have grown up in countries where capitalism is the leading political and economic template. But capitalism as an ideology, and as a cultural manifestation, is a relatively recent phenomenon whose origins were violent and turbulent.

We are a trading species; haggling and opportunity-seeking are part of what defines being human. Almost every society has engaged in this form of activity from time immemorial. But in the late 17th century, and especially in England, something occurred that marked a departure from the norms that had governed society for close to four thousand years.

Previously, religion provided the roadmap for morality, and hard work was promoted as the pathway to virtue. Royalty ruled over countries and provided order to their territories, which were constantly under threat. So remaining in power also meant engaging in war.

When taxes failed to meet the costs of war, King James the First, the largest landowner in England, decided to increase his revenue by granting exclusive licenses for the production of a product, a trade, or even a government service. This led to the creation of monopolies, and in 17th-century England there were monopolies for almost everything, from coal and bricks to food and even belts and buttons.

But what King James, and the monarchs after him, had not considered was the effect this would have on the culture of their society. As an increasing number of people competed for these monopolies, competition began to erode submission. A growing number of new landlords, members of trading companies and cloth manufacturers began to gain a voice in the way the country was run.

Juxtaposed with this transition was the English Civil War. Following the execution of King Charles the First, the monarchy was replaced by the Commonwealth. In a span of 30 years, merchants and manufacturers went from being subjects to public figures with political power. Economic grievances thus became political issues, and competition came to be seen as the sister of innovation. Gradually, the established hierarchy began to crack as new entrepreneurs emerged, making society more fluid in the process.

The result was a departure from the old ways of thinking which ignited commentary, debate and explanations. As the traditional order was overturned, people began to change their ideas about fundamental values. Previously, change to the order was regarded as blasphemy. But the growing prosperity offered by capitalism encouraged individuals to take risks, question the status quo, challenge authority, and be less fearful of innovation and novelty.

Today, this might seem almost like common sense. But this mode of thinking was the result of the birth of capitalism. It was a renegade mode of thought that reshaped the way men and women thought, and challenged the values, habits and modes of reasoning of the past. It met fierce opposition, was the bedrock of revolutions in almost every developed country, including the Industrial Revolution, and has outlasted socialism, Marxism, communism and every other ideological ‘ism’ to date.

As the centuries rolled on, capitalism spread like a prosperity juggernaut and in doing so, changed our mindsets and ingrained the concept of free markets and profit maximization as natural law. We believed the economists who preached it from their podiums and policy makers who used it as a yardstick for policy construction.

But just as King James, blindsided by short-term gain, failed to see the cultural ramifications of his policies, we too as a society have been oblivious to the cultural impacts of capitalism as we pursued our hedonistic objectives – both as individuals and as societies, especially in the past few decades.

Capitalism’s Waltz With Debt

Following the Second World War, advanced economies in the West installed a ‘state-administered’ version of capitalism in which economic growth coexisted with social and political stability. For almost three decades, this version of capitalism became the gold standard of government, and it remains nostalgic fodder for politicians’ campaigns.

But by the end of the 1970s, this formula for success began to crumble with the oil shocks and the outbreak of the Iran–Iraq War. As oil prices soared, industries suffered, and the newly elected governments in the US and the UK realized that they needed radical new ways to create economic growth if they were to stay in power.

One of the ways they responded to this crisis was by privatizing industries, a move now referred to as Reagan-Thatcherism. The strategy was simple – if companies and households could not earn their revenue as they did before, they could borrow their way to it. As public services were privatized, economic decision making was taken from government and handed to Wall Street.

As time went on, this process was amplified. As globalization moved manufacturing jobs offshore, high-paid skilled jobs were replaced by low-wage jobs in the service industries. Wages stagnated and people began to earn less than before.

In the process of finding a solution, governments once again turned to the commercial banks and relaxed lending restrictions. Even if wages were static, people could borrow money and maintain a certain lifestyle. As a result, the ability to manage society and economics slid gently from the control of the state to the commercial banks and financial markets.

The cultural ramifications were that debt-based capitalism became the norm and increasing importance was given to growth and consumerism. We went from being ‘citizens’ to ‘consumers’. Free market policies, and regulations that would enable more debt (see the repeal of the Glass–Steagall Act), became central tenets of macroeconomic policy.

The result is something we all know too well and are still recovering from – in 2008, our debt-based capitalist system collapsed under the weight of unsustainable loans, as our hedonistic pursuits became collectively unsustainable.

How did we get into this mess? How did we go from using capitalism as a way of escaping dictatorial doctrines to being a society that was wasteful, indifferent and voracious? How did we fail to see this cultural change, in which having more and consuming more became paramount? The more money, freedom, leisure and luxury we got, the more we wanted.

One of the main reasons for this was the widespread acceptance of contemporary free-market capitalist theories and of the policies and models based on them. Free-market economic models (and the policies built on them) rest on two main theories – the Rational Expectations Theory (RET) and the Efficient Markets Hypothesis (EMH).

These two theories work in conjunction. The RET states that the expectations of agents in a market are formed on the basis of the information they have and their past experience. Combined with the idea that stock prices reflect all available information about a stock, this means the price itself can serve as a proxy for that information, letting us make rational decisions based on our expectations.

If someone makes a bad decision, it is offset by someone who makes a good one. The market is thus in a constant state of efficiency in which the price of a share intrinsically incorporates all relevant information. As a consequence, markets are in a state of equilibrium (the EMH). We might have a shock now and then, but the market always returns to its natural state, which is equilibrium.
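The random-walk view of prices that follows from these theories can be sketched in a few lines. This is an illustrative toy model, not the formal EMH; the function name and parameters are assumptions for the sketch.

```python
import random

random.seed(42)  # reproducible illustration

# Under the EMH, new information arrives unpredictably, so a stock's price
# is often modelled as a random walk: each step adds an unforecastable shock.
def simulate_price(start=100.0, steps=250, shock=1.0):
    prices = [start]
    for _ in range(steps):
        prices.append(prices[-1] + random.gauss(0, shock))
    return prices

prices = simulate_price()
# In this model, past price changes carry no information about future ones,
# which is exactly why the current price is treated as the best forecast.
returns = [b - a for a, b in zip(prices, prices[1:])]
```

In such a world, no amount of studying past returns beats simply reading the current price, which is the intuition the next paragraphs lean on.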

Our faith in these theories has resulted in accepting that the bad actions of one economic agent would be offset by the good actions of another. As more debt was offered to us, we accepted it as it was now the norm. In the process of handing over responsibility to the ‘other’, we believed in the banks, bought their complex debt-based financial instruments (CDOs, CDSs, etc.) and listened to the bastions of the system as they preached about its benefits from their pulpits.

“These increasingly complex financial instruments have contributed to the development of a far more flexible, efficient, and hence resilient financial system than the one that existed just a quarter-century ago. After the bursting of the stock market bubble in 2000, unlike previous periods following large financial shocks, no major financial institution defaulted, and the economy held up far better than many had anticipated” – Alan Greenspan, Chairman of the Federal Reserve (2005).

Those who challenged the status quo were ostracized from academia and markets, while the players in the economy were happy to go on taking debt and serendipitously advance towards a false sense of prosperity, leaving the hard business of understanding the macroeconomics of the system to people who were elected to do so.

But following the crisis, we need to take a step back and question this reality and the abilities of these economic bastions. We need to ask whether the entire structure of analysis, and the experts’ level of understanding, should be called into question. This sentiment was best summarized by the Queen of England when she visited the LSE in 2008 and asked the economists in the room a very simple question,

“Why did no one see this coming?”

Thus, reviewing economic dogma necessitates a review of the very basics, i.e., the RET and the EMH. More importantly, it requires a change in mindset and asking the right questions. Consider this – if our economy is supposed to settle into patterns of equilibrium, how do we explain a capitalist system’s obsession with constant growth? If you work in a company, or are creating one, the primary objective is to grow. Every company is constantly trying to expand, or to buy out or merge with another company. No company is ever content to stay where it is, and even one that wished to remain constant would be forced to move by its competitors.

So if this state of entropy is the natural state of affairs, why do we use equilibrium-based economic models to interpret the economy and to pass public policies (monetary and fiscal)? We can extend that line of questioning and challenge the RET as well – if all decisions are rational, why do people engage in philanthropy? From a purely self-interested perspective, it would seem the most irrational thing to do.

In spite of these glaring divergences from reality, we continue to pass policies and enact measures based on analyses using DSGE (Dynamic Stochastic General Equilibrium) models. It’s like bringing a turtle to a rabbit race.

Not questioning these fundamentals has been one of the primary causes of our addiction to debt and of the laissez-faire attitude that has allowed banks to become bigger and more entrenched in every aspect of policy making. What is required, therefore, is a return to the very fundamentals on which we base our understanding of the economy.

We need to move from equilibrium-based models to entropy-based ones. Equilibrium can exist in an economy, but it is a temporary state. The natural state of an economy is entropic due to the constant exchanges and decisions being made by agents and actors in the economy. In short, we need to move towards complexity economics.

Complexity economics was devised on this understanding of entropy and is a field gaining increasing prominence today. In a recent speech, Andy Haldane, the Chief Economist of the Bank of England, stated that the bank needs to base its models on complexity science, as it allows the integration of complex, adaptive networks. Using such models, economic policy would be based on dynamic complex-network analysis and behavioral modeling, giving us a template better suited to modeling reality.
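The flavour of model complexity economics starts from can be illustrated with a toy agent-based sketch: interacting agents and emergent aggregate patterns, with no equilibrium imposed. The agent counts and exchange rule here are illustrative assumptions, not a description of any central bank’s models.

```python
import random

random.seed(1)  # reproducible illustration

# A toy agent-based exchange economy: agents start with equal wealth and
# repeatedly make pairwise transfers. No equilibrium is imposed; aggregate
# patterns (here, wealth inequality) emerge from local interactions.
def run_exchange(n_agents=100, rounds=10_000, amount=1.0):
    wealth = [100.0] * n_agents
    for _ in range(rounds):
        giver, receiver = random.sample(range(n_agents), 2)
        if wealth[giver] >= amount:  # the giver must be able to pay
            wealth[giver] -= amount
            wealth[receiver] += amount
    return wealth

wealth = run_exchange()
# Total wealth is conserved, but its distribution drifts away from uniform:
# the system keeps evolving rather than settling into a static equilibrium.
```

Even this crude sketch behaves very differently from an equilibrium model: the interesting output is the evolving distribution, not a fixed point.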

The blockchain is many things to many people. But at its heart, it is an engine of transparency. While it is irrational to think of the blockchain as some kind of panacea for our current economic malaises, it is nevertheless a useful tool that, combined with other financial technologies, can offer us more clarity about our capitalist system. And this clarity is required today if we are to use capitalism to escape from a coterie of rulers, as we did in the 17th century.

This issue needs to be discussed today as we enter an era in which banks, academics and public institutions push us towards a cashless economy. Becoming a digital economy comes with benefits, but also with increased risk, owing to the speed, size and interconnectedness of the financial system. And one of the ways we can ignite this conversation is by leveraging the transparency provided by new financial technologies, especially the blockchain.

Transparency is a lop-sided issue in today’s financial markets, for as we transition to a cashless economy transparency seems to be a one-way street. As customers and citizens become digital avatars, banks and government institutions now have greater amounts of highly granular data with stark levels of detail.

But peculiarly, consumers are not privy to the same degree of transparency when it comes to the financial operations of banks and public governing bodies. A review of some of the scandals that have unfolded in the past three months highlights the need for two-way transparency.

The size of these markets, and the fact that the biggest money launderers are banks, necessitates a conversation about transparency before we move to a cashless economy.

It is, therefore, urgent that we ascertain what kind of an economic reality we are headed towards before any infrastructural changes are put in place. Or else, the spread of a crisis in a hyper-connected digital economy will be faster and more violent than before.

The blockchain’s transparency is, however, only part of the equation. Transparency can be illuminating, but turning transparency into clarity still requires effort. While the blockchain allows us to see what is going on, it would make no sense to feed this information into models based on equilibrium. If we are to truly leverage the potential of the blockchain and use it to define macroeconomic policies that are more representative of reality, it has to work in conjunction with complexity economics.

As we move into a more digital world with faster technological evolution providing strong headwinds of change, it is important to realize that adapting to this change will not simply be a case of investing in the new tool or updating one’s skillset. Capitalism has always been a renegade, whose greatest impacts have been born out of conflict and change. At every turn, this has required that tough questions be asked, and our notions and conceptions be rewritten.

If we are to continue growing with hedonistic sustainability, we need to better understand capitalism. The unison of complexity economics and the blockchain is a step in that direction and will entail the creation of new cultural forms, institutions and a new vocabulary for education. But with a better understanding of capitalism, people in democracies can play a much more positive and vigorous role in shaping their economic institutions.

There would be no capitalism without a culture of capitalism and there would be no culture if the existing ideologies were not challenged and overcome. At a time when information is so abundant that we can get the answer to any question, the real responsibility becomes asking the right question. If we fail to ask these questions and leverage the power of decentralization and transparency, we risk starting a conversation with the next generation by beginning with an apology.

Pretty much every flavour, from MBA graduates and successful professionals who discovered they wanted to suffer the freedom of starting from scratch, to young developers in accelerator programs across Europe.

The lost causes, the predictable success stories and everything in between – half probably don’t remember me, a few hate me, too many take my advice too seriously.

I’ve managed dozens of mentors who gave up time they didn’t have for their families to help the next generation in many of these programs. They, too, obviously had opinions on how to run a business to support entrepreneurs launching businesses.

Since launching my first FinTech startup 3 years ago, with both the benefits and challenges of having never worked in financial services, I’ve dealt with people who wasted my energy as well as geniuses we couldn’t have survived without. And, truth be told, with geniuses who could have ended us, and with other people to whom we owe our survival.

I’ve experienced from both sides of the table how mentoring can make or break a startup: in many ways mentoring will shape your startup more than the little cash you’ve got left in the bank. Cash is probably a more urgent issue, but that merits another post.

Like a cult movie, mentors play three roles. It obviously helps if you figure out what shapes potential mentors before they become your companion.

The Good

Good mentors are the worst. The best startups navigate uncharted waters by definition, normally working on a problem no one has solved that way (if at all) before.

Domain experts will view everything through the lens of their (corporate) successes and failures. They are likely to answer the startup’s questions instead of helping it identify the right hypotheses to test.

Serial entrepreneurs will rarely be transparent about the endless times they almost gave up, and rarely realise how much things have changed since they last spent time in a basement. Yet many can boost morale, share great ideas and introduce you to powerful friends. Handle with care.

The Bad

Bad mentors are relatively easy to spot. Conflicts of interest. Lack of hobbies. Ego driven. Trying too hard or not listening at all.

They might be able to provide marginal value, but they are guaranteed to distract you for as long as you allow them to.

Run as fast as you can unless you are out of ideas – in which case you should probably try to get a job.

The Ugly

Ugly mentors are the ones that hurt repeatedly, but that can gauge the urgency of the long list of unsolved problems that can make your startup fail, and help you identify options to continue your quest.

They are the ones that let you know when you are wrong, trying different angles to make you aware of risks you are oblivious to.

They are also the ones that provide data driven emotional support in times of need, the ones that understand that as a CEO one of your toughest challenges is managing your own psychology.

Always keep a few on speed dial. If you can’t find one, try talking to a stranger whose life could be better thanks to what you are up to and learn why he doesn’t care.

Worth it?

So is mentorship worth it? Most definitely.

You have to push your limits and try your luck. Learn and share. Share and learn, because at the end of the day, beyond the human need to help others develop by looking at their problems from your perspective, the intellectual freedom of thinking about challenges you are not committed to, or the comforting feeling of your opinion having a positive impact on projects that might change the world, the best thing about being a mentor is that it helps you realise whether you are ready to launch another startup or would rather watch from the sideline.

An introduction to the difference between Couchsurfing, Uber, Airbnb, DoorDash, and Etsy

The sharing economy: We all have an understanding of it, but describing it is still a challenge.

We’ve also heard it called many things: “sharing economy”, “collaborative consumption”, “peer economy”, “on-demand”, and even “peer-to-peer marketplaces”.

All the companies placed in these categories have similar attributes: they wed supply and demand. Too often, however, we use the phrases interchangeably when there are actually key differences that should be considered in order to understand how these new categories shape our economy.

The phrase “sharing economy”, most similar to “collaborative consumption” and “peer economy”, suggests an economy based on resources, and not on any abstract system of money. For example, one of the purest representations of the sharing economy would have to be Couchsurfing, which was founded over a decade ago.

As a host on Couchsurfing, you offer a spare bedroom in your home (or even just a couch) to “surfers”, usually foreigners travelling through the area who need a place to crash. In this case, there’s no exchange of money whatsoever, reflecting a true sharing model.

Yet Uber and Airbnb, not Couchsurfing, are considered the biggest “sharing economy” companies out there, most likely because Airbnb and Uber are valued at $25.5 billion and $62.5 billion, respectively. So where’s the sharing? Someone is either hiring an Uber or renting an Airbnb unit. The only “shared” element is that the cars and spaces are owned by individuals and are often underused assets – a car, a space, and in some cases a person’s time.

But there’s still money being exchanged. Uber and Airbnb would better be described as “peer economy” companies, because “peer-to-peer” is a decentralized system versus a more traditional capitalist system, where a business owner owns the production and hires the labor. In either case, however, money changes hands.

Further discrepancies arise when you take a closer look at these two peer economy companies. Most obviously, Uber is an “on-demand” service powered by “peer-to-peer labor”, whereas Airbnb is more of a marketplace. One can get a room on Airbnb immediately, but on-demand fulfilment is not a core part of the platform, and there’s no labor component at all.

This differs from Uber, where every call is immediate: an action that demands an immediate response.

So what are the other on-demand startups out there that also aggregate labor? Dozens of food delivery companies (e.g. DoorDash and Instacart), household errands and services (e.g. Handy, TaskRabbit), and many others (e.g. Postmates, YourMechanic, Staffly)—these are less “sharing” economy companies, and more “excess labor” companies. In the case of these companies, there are no assets being shared, but services are being provided by a person.

Companies like Breather, WeWork, and Rover, on the other hand, are more like Airbnb, in that they’re marketplaces, with an on-demand component, but not an excess “labor” component.

Finally, there are the peer-to-peer models that are pure marketplaces, including Etsy, Shapeways, Vinted, and Wallapop. For example, Vinted has no “on-demand” component, but it is a flavor of the peer-to-peer model since individuals are buying, selling, and swapping each other’s clothes. It’s basically Amazon for secondhand clothing.
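The distinctions above can be distilled into a small attribute table: does money change hands, is the supply side labor, and is fulfilment on-demand? The attribute assignments below are my reading of the article, not an official classification, and the category labels follow the article’s usage.

```python
# Hypothetical attribute table: (money changes hands, labor-based, on-demand).
COMPANIES = {
    "Couchsurfing": (False, False, False),
    "Uber":         (True,  True,  True),
    "Airbnb":       (True,  False, False),
    "DoorDash":     (True,  True,  True),
    "Etsy":         (True,  False, False),
}

def categorize(money, labor, on_demand):
    if not money:
        return "sharing economy"           # true sharing: no payment at all
    if labor and on_demand:
        return "on-demand / excess labor"  # a person performs the service now
    return "peer-to-peer marketplace"      # assets or goods, money exchanged

categories = {name: categorize(*attrs) for name, attrs in COMPANIES.items()}
```

Encoding the attributes this way makes the article’s point concrete: only Couchsurfing ends up in the “sharing economy” bucket once money is in the picture.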

But across all these companies, consumers are still paying, which is why the Harvard Business Review argues we should be calling Airbnb and all its peers (Uber, Lyft, WeWork, Instacart, Handy, etc.) part of an “access economy”, not a sharing economy:

Sharing is a form of social exchange that takes place among people known to each other, without any profit. Sharing is an established practice, and dominates particular aspects of our life, such as within the family. By sharing and collectively consuming the household space of the home, family members establish a communal identity. When “sharing” is market-mediated — when a company is an intermediary between consumers who don’t know each other — it is no longer sharing at all. Rather, consumers are paying to access someone else’s goods or services for a particular period of time. It is an economic exchange, and consumers are after utilitarian, rather than social, value.

While HBR makes a solid point, it doesn’t look like their article (published a little over a year ago) will make any inroads in changing how we speak about this new generation of companies. As a phrase and category, the “sharing economy” is here to stay, and it will continue to be used to describe services as wildly different as Couchsurfing (a website where people host strangers in their home for free), Uber and Lyft (apps where you press a button to hail a ride from a company contractor), and Vinted (an online marketplace where people buy, sell, and swap clothing).

My next pieces will expand on the sharing economy divisions introduced above, and will reveal how even “peer-to-peer” and “on-demand” are broad umbrella categories that don’t always mean the same thing in every case.

When a person receives a check and has to wait two to four days for it to clear, that’s not a good scenario. Many say they have gotten used to it, but the younger generation lives a fast-paced life; millennials are challenging the status quo. There is a rising need for immediacy in payments, whether between banks, businesses, or even peer-to-peer. Solutions for the immediate transfer of funds are available in some parts of the world. We have been tracking real-time payment systems launched across the world on an ongoing basis and have had discussions with the people who built them. I thought I would share the findings as a timeline infographic to illustrate the trends.

People and businesses worldwide want payment systems that achieve the desired speed of transactions, minimize transaction costs, reduce the risk of fraud and deliver satisfactory service across different channels. That’s where real-time payments come into the picture. They have already been implemented in a number of countries, and the US is not on the list.

Below is an infographic showing a timeline of countries adopting real-time payments:

1973: Zengin (operates 08:30-16:40)

Japan was the first country in the world to implement real-time payments.

1987: SIC

Switzerland was the first country in Europe to implement real-time payments.

1992: TIC-RTGS (operates 08:30-17:30)

Turkey was the second country in Europe to implement real-time payments.

1995: CIFS

Taiwan was the second country in Asia to implement real-time payments.

2000: Greiðsluveitan (operates 09:00-16:30)

Iceland was the third country in Europe to implement real-time payments.

2001: HOFINET

South Korea was the third country in Asia to implement real-time payments.

2002: SITRAF (operates 07:30-17:00)

Brazil was the first country in South America and among the BRIC nations to implement real-time payments.

2004: SPEI

Mexico was the first country from North America to implement real-time payments.

2006: RTC

South Africa was the first African country to implement real-time payments.

2008: TEF

Chile was the second country in South America to implement real-time payments.

2008: Faster Payments

The UK was the fourth country in Europe to implement real-time payments.

2010: IBPS

In 2010, China introduced real-time payments.

IMPS

In 2010, India introduced real-time payments.

2011: NIP (operates 08:00-17:00)

In 2011, Nigeria introduced real-time payments.

2012: Elixir Express

Poland implemented real-time payments in 2012.

BiR

Sweden implemented real-time payments in 2012.

2014: Nets

Denmark implemented real-time payments in 2014.

FAST

In March 2014, Singapore introduced real-time payments.
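The timeline above can be captured as structured records, from which the “first/second in a region” claims follow mechanically. An abridged sketch (records and field names are my own framing of the infographic data; the UK’s Faster Payments launched in 2008):

```python
# Abridged records from the timeline: (system, country, region, year).
SYSTEMS = [
    ("Zengin",          "Japan",          "Asia",   1973),
    ("SIC",             "Switzerland",    "Europe", 1987),
    ("TIC-RTGS",        "Turkey",         "Europe", 1992),
    ("CIFS",            "Taiwan",         "Asia",   1995),
    ("Greiðsluveitan",  "Iceland",        "Europe", 2000),
    ("Faster Payments", "United Kingdom", "Europe", 2008),
]

def regional_order(region):
    """Countries in a region, ordered by year of real-time payments adoption."""
    rows = sorted((year, country) for _, country, r, year in SYSTEMS if r == region)
    return [country for _, country in rows]
```

For instance, `regional_order("Europe")` reproduces the Switzerland, Turkey, Iceland, UK ordering stated in the entries above.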

Countries like Singapore and the UK offer this service free of charge. Even countries like Nigeria and India offer such real-time payment services.

In the US, we don’t yet have what people in many countries across the world already enjoy – which is an opportunity to build free real-time payments for any amount.

NACHA, the electronic payments association, recently proposed a solution to provide a new, efficient and ubiquitous capability for expediting the processing of ACH transactions. Under the new rule, same-day processing of virtually any ACH payment can be enabled. But it will take some time for the rule to become fully effective. Same Day ACH would become effective over three phases, beginning in September 2016:

In Phase 1, ACH credit transactions will be eligible for same-day processing, supporting use cases such as hourly payroll, person-to-person (P2P) payments and same-day bill pay.

In Phase 2, Same-day ACH debits will be added, allowing for a wide variety of consumer bill payment use cases like utility, mortgage, loan and credit card payments.

Being savvy in the remittance industry is no longer limited to maintaining price competitiveness; the field now spans many more dimensions. This industry has faced multiple issues in the past, such as high transfer costs, limited money distribution methods, limited brand options, and limited ways to deal with money. The advent of new players in this space is redefining the solutions to these problems in an industry that was largely a duopoly until a few years ago.

The infographic below provides a sneak peek at these issues and how they’re being addressed:

The remittance transfer industry is getting disrupted with upcoming models that encompass benefits of cost, customer experience, convenience and brand. These remittance models have evolved from cash transfers > internet banking > mobile wallets > digital payments using mobile money > crypto currency, and we have discussed them all in our articles.

Another major problem to be addressed by the remittance industry is the strict AML regulation that is forcing a number of banks to exit the remittance business or close the accounts of firms specializing in money remittance services. We have seen many such cases: JPMorgan Chase and Bank of America in the USA, Westpac in Australia, and more. In this situation, any company able to exploit an alternative model for remitting money is poised to hold a stronger market position.

Mobile money transfer platforms and bitcoin are emerging as promising tools for this situation, though each has its own limitations. While the large footprint potential of mobile money transfer platforms is highly dependent on the host country’s regulatory environment, cryptocurrency is still at too nascent a stage to reach the migrant masses and achieve its potential benefits.

Here, a significant role can be played by recently established, fast-growing players such as Xoom, TransferWise, and WorldRemit, to name a few. These companies have an advantage over start-ups thanks to strong balance sheets and significant customer bases, and over traditional companies such as Western Union and MoneyGram, which might find it difficult to deploy new currencies on their existing legacy systems.

In this article, we have analyzed three new and growing companies, across different corridors from the USA, to identify how their technology-based business models help them pass benefits on to the end customer and thus differentiate themselves from established industry giants.

Xoom:

Xoom has positioned and established itself as a technology-driven company enabling instant and cheaper money transfers. For instance, for the USA–Vietnam corridor, the company offers all the options of cash pickup, bank deposit and home delivery on the recipient side, while charging a flat fee of $2.99 for payment through bank accounts and $13.00 for credit cards. By comparison, a user sending money for pickup through an agent via Western Union is charged $5 and $20 for payment through bank account and credit card respectively.

This accounts for an average 61% higher fee.
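The “average 61% higher” figure can be reproduced from the fees quoted above. A quick check, using the flat fees in USD for the USA–Vietnam corridor:

```python
# Flat fees quoted above for the USA-Vietnam corridor, in USD.
XOOM = {"bank": 2.99, "card": 13.00}
WESTERN_UNION = {"bank": 5.00, "card": 20.00}

def pct_higher(base, other):
    """How much higher `other` is than `base`, in percent."""
    return (other - base) / base * 100

diffs = [pct_higher(XOOM[m], WESTERN_UNION[m]) for m in ("bank", "card")]
average = sum(diffs) / len(diffs)  # about 60.5%, i.e. roughly 61%
```

The bank-transfer fee is about 67% higher and the card fee about 54% higher, averaging out to the roughly 61% cited.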

Pairing low cost with speed, Xoom transfers take a maximum of two days through any mode, including deliveries to rural areas, while Western Union transfers can take four to six days. Xoom’s model of partnering with existing agent cash-out points enables it to pass monetary benefits to the end consumer. For a country like Vietnam, with an unbanked population of 42% and a $16 billion remittance market, such a difference in fees can be a game changer for major players.

WorldRemit:

WorldRemit has carved out a significant space in the remittance industry with its technology-based systems. In some of the most crowded corridors, such as USA-Mexico and USA-Philippines, the company achieves highly competitive rates against industry giants like Western Union, passing the cost advantage on to end customers transferring smaller amounts.

In the USA–Mexico corridor, WorldRemit applies a flat fee of $3.99 for transfers under $2,000, with an exchange rate averaging 0.6% better than Western Union's. Transfers are made instantly, versus the minimum of four days taken by traditional models. Migrants sending money from the United States to the Philippines, the fourth-largest remittance-recipient country, can realize similar benefits: the company enables instant transfers at exchange rates averaging 1.3% higher, with fees an estimated 44% lower than those of conventional players. Advantages like these can translate into significant traction for the winning player in this multibillion-dollar market.

Boom Financial:

Boom Financial is disrupting the remittance space by addressing its major pain points: service for the unbanked, the convenience of transfers through mobile applications, cost-effective service through a technology-driven model, and multiple distribution modes.

With no monthly or membership fee, Boom Financial enables cash deposits at Boom stores or mobile branches for a fee of just $1. The company issues a Boom Visa prepaid card free of charge the first time and charges an ATM withdrawal fee of just $2 per transaction. Transferring up to $2,999 to a Boom account in Haiti through a mobile phone costs the end consumer a maximum of $5. The total transfer fee thus runs from $3 to $7, rising to a maximum of $12 when money is sent through a Boom agent. Conventional models usually rely on cash pickup at an agent location, charging a fee of $12 to $48 depending on whether a bank account or credit card is used as the payment mode. That makes the established players' charges 4 to 16 times higher than those of these newer players with innovative models.
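The "4 to 16 times" figure follows directly from the endpoints of the two fee ranges; a short sketch (figures from the paragraph above, names mine):

```python
# Fee figures quoted above (USD).
boom_min_fee = 3              # Boom: $3-$7 typical, up to $12 via a Boom agent
conventional_fees = (12, 48)  # conventional cash-pickup-at-agent models

low_multiple = conventional_fees[0] / boom_min_fee   # 4.0
high_multiple = conventional_fees[1] / boom_min_fee  # 16.0
print(f"Conventional fees run {low_multiple:.0f}x to {high_multiple:.0f}x higher")
```

Note the comparison uses Boom's $3 minimum at both ends; comparing range-to-range would give a narrower spread.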

Spend ten minutes walking down almost any street in the world, and you'll see a familiar sight: a young person with Apple's iconic white earbuds firmly planted in their ears. They might be bobbing along to the beat, or they could be keeping their head down, just trying to get to work through the crowds.

The rise of the iPod, the music-playing mobile phone, and a number of streaming media platforms all point toward a single, simple idea: music is important. It’s important to people on an individual level, it’s important to the human race on a societal level, and it’s hugely influential to the state of technology. The devices we use to listen to music help shape the technological landscape of the day.

But how did we get here?

When did those white earbuds become synonymous with young people and their portable music player of choice, the ubiquitous iPod? Was the iPod the first MP3 player? How did people listen to music before the Walkman? And where did it all start? The answers to these questions tell us not only about the history of music consumption technology, but also about how people have related to music for the past 150 years.

Early Days: The Phonograph

Our story begins, as many do, with Thomas Edison. Before his invention of the phonograph in 1877, music listeners could only listen to their favorite songs when someone else was playing them, whether in a concert hall or at home. Music has been an important part of human culture since prehistoric times (some experts believe music to have emerged 30,000 to 60,000 years ago), but the phonograph completely revolutionized its consumption.

Before Edison’s invention (the second iteration of which is pictured above), some inventors had managed to record music onto physical media, but 1877 saw the first machine that could both record music and play it back. Sounds to be recorded were transmitted through a recording stylus, which would create indentations on a round phonograph cylinder, and a playback stylus could read the recording and play it back through a diaphragm and the iconic horn.

The phonograph cylinders themselves were interesting devices — Edison started with one that consisted of tinfoil wrapped around a metal cylinder. Almost a decade later, a group of researchers and engineers that included Alexander Graham Bell created a phonograph cylinder made of cardboard covered in wax that could be engraved with recordings.

Around the same time, Edison created an all-wax cylinder that could be shaved down to record new sounds (this cylinder could be considered the ultimate precursor to the CD-RW). And in 1889, pre-recorded cylinders hit the market. Over time, the wax used for the cylinders was hardened, increasing the number of possible playbacks from a dozen or so to around a hundred times.

In the 1890s, the transition to using flat-disc records began. The recording was etched onto a disc that would be recognizable even today as a record. Interestingly, the dominance of the record over the phonograph cylinder didn’t come down to audio fidelity: the main advantage of the disc record was that it could be more easily mass-produced. By creating a master stamp, a number of records could be stamped in a short period of time, whereas each phonograph cylinder had to be recorded individually, significantly slowing the process.

Discs were first released in a five-inch version, then in a seven-inch, a ten-inch, and finally a 12-inch version in 1903. Around this time, interest in double-sided records started to pick up, and Edison realized that the cylinder was dying. He soon switched to the Edison Disc Record (seen below), a 1/4-inch-thick piece of shellac that could only be played on Edison Disc Phonographs.

Shellac, the standard material of the day, wouldn’t be replaced by vinyl, a lighter and more durable material, until after World War II. After a decrease in sales during the war, record sales got a big boost, with more and more families having phonographs with automatic record changers in their homes.

The transition to vinyl also coincided with the change of the industry standard from 78 rpm to 33 1/3 rpm, which allowed a much larger amount of music to be recorded on a single disc. A 10-inch, 78 rpm disc (the most popular size for a number of years) could only contain about three minutes of music, so long songs or collections were often split across a number of discs, each of which was contained in a sleeve that was bound into a book format with the other sleeves, leading to the term “record album.”

A 12-inch, vinyl, 33 rpm record, however, could contain around 20 minutes of music on each side, and this longer-playing format began to dominate the market (it retained the “record album” moniker, as well as gained the title “LP” for “long-playing”). 45 rpm records also increased in popularity after the war, with most containing a single song on each side, earning them the name “singles.” Extended-play (EP) 45s were also introduced, each of which could contain two songs on each side.

Beyond this point, the changes in record players were mainly in the hardware used to turn the disc and relay the sound — belt- and direct-drive turntables, balanced tone arms, better styli, and so on. These innovations continue today with brands like Gemini and Stanton. The now-discontinued Technics SL-1210 is on display at the London Science Museum and described as one of the pieces of technology that have shaped the world we live in.

Taking to the Air: Radio

Although radio technology had been around since the early 20th century, it wasn’t until later that music started to hit the airwaves. The early history of music radio is murky, but a college radio station in the San Jose area is purported to have broadcast music between 1912 and 1917, though it didn’t start broadcasting daily until later.

During World War I (as well as World War II), the US Congress suspended all amateur radio broadcasts, meaning that many stations went off the air permanently. But 1XE of Medford, Massachusetts, was broadcasting music in 1919, shortly after the end of the war, and in the following years, more music radio stations began to pop up.

Unfortunately, they met with some resistance: many people believed that radio should only be used for two-way communication, and a New York station was even shut down by a federal inspector who stated that “there is no place on the ether for entertainment.” If only he could see the means by which we transmit music today.

Commercially licensed stations started appearing around this time as well — Pittsburgh’s KDKA is arguably the first, with its inaugural broadcast of presidential election results in October 1920. After that, the popularity of radio exploded: between 1920 and 1930, a reported 60% of American families purchased radio receivers, and the number of families with radios more than doubled during the 30s, ushering in the golden age of radio (usually characterized as lasting from the 20s to the 50s).

In the early days, music wasn’t the only thing broadcast on the radio — in fact, a number of stations only started broadcasting music after they had been on the air for a time. News, sports scores, voting results, soap operas, lectures, weather reports, comedians, political commentaries, and stories could all be heard on the airwaves. Chicago’s KYW broadcast opera six days a week, and didn’t start broadcasting popular and classical music until the opera season was over and more programming was required.

1922 saw the first appearance of something that would change the future of music broadcasting: the first radio advertisement. One has to wonder how surprised the people at AT&T, the company who paid for the ad, would be if they could see the future of advertising on Internet radio and streaming services. They probably had no idea what they were starting.

Before it was considered acceptable to advertise on the radio, companies would sponsor musical programs, which had names like Champion Spark Plug Hour, Acousticon Hour, and King Biscuit Time. Classical music was often broadcast live, a practice that still survives on a very small scale today. Country music also became more popular during the 1920s and 1930s, with a number of popular country shows being broadcast.

During this time, the standard format for radio stations was the full-service format, which saw the station broadcasting not only music and other types of shows, but also news, weather reports, talk shows, and a wide variety of other things of interest to the local public. This could be mixed with a network broadcast, much as is done on public radio stations today.

The development of popular music is often attributed to the radio, and the rise of top 40 stations in the early 50s has influenced how music radio operates even today. Because the format allowed stations to operate with less space, equipment, and staff than full-service stations required, top 40 stations quickly became the norm, especially after higher-fidelity magnetic recording made it feasible to broadcast pre-recorded programs in the 1940s (before this, most radio shows were broadcast live for better sound quality).

Another significant development in radio technology also took place around the middle of the century: the invention of the transistor. After its invention in 1947, it was quickly integrated into radios, allowing them to be made smaller and portable, instead of the large, stationary ones typically associated with the golden age of radio. In the 60s and 70s, billions of these radios were built, making easily portable music a reality.

Taking It with You: The Tape

In 1958, RCA would change the future of home music consumption by introducing the RCA tape cartridge (pictured to the right of a later compact cassette below). Before this cartridge, magnetic tape wasn’t a realistic option for home use, as reel-to-reel players were too complicated for consumers, especially compared to record players, which had been the de facto standard for home listening for several decades.

This was also the first time that acceptably high-quality audio had been encoded onto a magnetic tape medium for home use. Although the RCA tape cartridge introduced the possibility of 60 minutes of high-quality home listening on magnetic tape, it wouldn’t prove to be a success — it disappeared from shelves by 1964, largely due to low sales of players caused by a hesitance on the part of retailers and hi-fi enthusiasts to adopt the technology.

A number of competing systems tried to gain market dominance through magnetic tapes, but it wasn’t until 1964 that home audio would unite around a new format: the 8-track tape. Bill Lear, of the Lear Jet Corporation, along with representatives from Ampex, Ford, General Motors, Motorola, and RCA worked together to improve the technology that had been previously developed for the 4-track tape, which itself had been an improvement on the 3-track model.

Other tape formats had already been available in the home market for a number of years, but the inclusion of 8-track players in many cars of the 60s and 70s led it to become the dominant format of the day, despite its initial 46-minute play-time format.

By the late 60s, all of Ford's cars were offered with an available 8-track player as an upgrade, and hundreds of tapes were released, with the catalogue soon rivaling that of vinyl. And while other 8-track formats came and went, the Lear tape held fast as the dominant one. Although the reign of the 8-track was brief (it had been supplanted by Philips' compact cassettes by the late 70s), it remains an iconic music storage method.

Once Philips proved in the early 1970s that their compact cassette tapes could carry high-fidelity musical content, they began a quick rise to dominance of the automobile music market. The cassette's small size was a big point in its favor, as smaller tape decks in cars and homes were advantageous; even soldiers in Vietnam appreciated the medium's smaller size and greater portability.

Once manufacturers started making smaller, portable tape decks, the cassette’s place in music had been cemented. Portable stereos became more feasible than they had been when the 8-track was the standard format, and adoption by the automobile industry ensured a quick rise in popularity.

An innovation possibly even more important than the cassette itself, however, was released by Sony in 1979: the Walkman. The introduction of the tiny portable stereo tape player helped even more of the music-listening public accept tapes as a viable home and personal music medium.

The Walkman, originally released as the Sound-About in the US, the Stowaway in the UK, and the Freestyle in Sweden, fundamentally changed how people listened to music; no longer tied to large home record players or large, inconvenient portable tape decks, listeners could easily take their music with them wherever they went. And because the first Walkman included two headphone jacks, music could be enjoyed with a friend.

In 1983, cassettes outsold vinyl for the first time, largely thanks to the Walkman and similar devices from other manufacturers. Continued innovation brought AM/FM radios, bass boost, rechargeable batteries, and auto-reverse to the Walkman, which continued to present a sleek face throughout a large number of iterations throughout the 1980s and early 1990s.

The Walkman name is so iconic that it’s been used for a range of devices, from cassette players to CD players to video MP3 players, and is still in use today.

The Digital Age: Compact Disc

Although digital recording had been happening since the late 1960s, it wasn’t until the early 1980s that the first commercial compact discs (CDs) appeared. Although discs closely resembling the eventual format had been demonstrated by companies in the 70s, the format of the CD was standardized in 1980, making it much easier for manufacturers to get into the business.

Before the CD, magnetic tape data (or the track on a record) was read mechanically, with a sensor turning a magnetic or physical pattern into an electrical signal. The use of a laser to read the data encoded on the disc was a huge leap forward in audio technology — the laser was bounced off of the disc, and the reflections were read by a sensor which transmitted an electrical signal.

In 1981, ABBA’s The Visitors became the first popular music album to be pressed to CD; it was quickly followed by the first album to be commercially released on CD, Billy Joel’s 52nd Street. Since then, musical releases have almost always included a CD release, with the format dominating the market in the late 80s, through the 90s, and into the early 2000s.

Error correction was built into CDs from very early on, one of the factors contributing to the format’s popularity with audiophiles. Although diehard LP fans still praised vinyl (especially in America, where resistance to the CD was a bit more entrenched than in Europe), the ability for a CD player to dampen the effect of a scratch or fingerprint was a huge leap forward in audio technology.

And the skip protection introduced into later players would further enhance the listening experience by storing a few seconds of music ahead of time so that playback would continue uninterrupted in the event of a skip.

By the late 80s, CDs had exploded in popularity, with the cost of CD players coming down and an increasingly large number of artists converting their back catalogs to the new digital format. The 74-minute playtime of a CD combined with its high audio quality, as well as the reading laser's resistance to interference from dust and other particles, made the CD the primary musical medium for the next decade, with home and portable players quickly being adopted by listeners.

Although the CD has remained relatively unchanged in its lifetime, there were a number of slight changes to the format through the years. In 1983, the first experiments with erasable discs were revealed, paving the way for the later CD-RW (re-writeable), which superseded the CD-R (recordable) in the mid-90s. The cost of both the CDs and recorders able to write to them fell quickly, making these discs, at least temporarily, ubiquitous.

CDs also made a big splash in the computer industry, with CD-ROMs (CD-read only memory) debuting in 1985. Further refinements led to the creation of the Video CD, Super Video CD, Photo CD, DVD, HD DVD, and Blu-Ray discs.

The CD’s musical reign didn’t go unchallenged, though. In 1992, Sony announced the MiniDisc, a magneto-optical storage medium that combined the storage systems of magnetic tapes and optical CDs. Sony hoped that the smaller size and significantly better skip resistance would help the MiniDisc supplant the CD as the musical medium of the day, but it was not to be.

A few audiophiles criticized the near-CD-quality audio, and the MiniDisc also suffered from a lack of players and pre-recorded albums, the drastic fall of blank CD prices, and most notably, the emergence of MP3 music players.

Although the MiniDisc did have some advantages over CDs, it suffered from poor timing, with the solid-state revolution quickly making them obsolete. Sony stopped making MiniDisc Walkman players in 2011 and all other MiniDisc players in 2013, totally killing off the medium (though some diehards still defend it).

Electronic Music’s Teething Years: The First MP3s

The history of MP3 is a fascinating one. It began in 1982, when Karlheinz Brandenburg was an electrical engineering PhD student at Friedrich-Alexander University Erlangen-Nuremberg. His thesis advisor issued him a challenge: find a way to transmit music over digital phone lines.

1986 saw the first real progress on the project, when more advanced technology was used to separate sounds into three sections, or “layers,” each of which could be saved or discarded depending on its importance to the overall sound. Brandenburg and his colleagues took advantage of a psychoacoustic phenomenon called auditory masking to compress the file size of the recording.

Auditory masking is what happens when the human ear is unable to hear certain sounds; louder sounds or those with lower frequencies can mask other sounds, meaning that the obscured sounds can be discarded from a recording without a noticeable loss in quality. This led to the ability to encode files with decreased bitrates, resulting in smaller files that retained an acceptable amount of the quality of the original sound.

The Moving Picture Experts Group (MPEG), a group tasked with creating worldwide standards for audio and video coding, was created by the International Organization for Standardization (ISO) in 1988. The standard that MPEG created included Layers I, II, and III, the last giving the highest quality at low bitrates.

Work on digital encoding continued, but ran into problems, with voices being recorded in very low fidelity. After further experimentation with psychoacoustic models and data codecs — and a close call wherein the encoding simply stopped working two days before submission of the codec — MPEG-1 Audio Layer III was finalized in 1991.

MPEG-1 Audio Layer III (and MPEG-2 Audio Layer III, an improved format standardized in 1998) is a lossy audio compression format, meaning information is permanently discarded during encoding; every time a file is decoded and re-encoded, still more is lost. MP3's compression algorithms take advantage of the limitations of human hearing to discard sounds that are poorly perceived, or not perceived at all, by the human ear, resulting in very small music files compared to more robust lossless algorithms.
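The article doesn't quote file sizes, but the savings are easy to estimate from the bitrates. This sketch compares uncompressed CD-quality PCM audio with a common 128 kbps MP3 (the four-minute song and the variable names are my own assumptions, for illustration):

```python
# Uncompressed CD-quality PCM audio.
SAMPLE_RATE = 44_100   # samples per second (CD standard)
BIT_DEPTH = 16         # bits per sample
CHANNELS = 2           # stereo

pcm_bitrate = SAMPLE_RATE * BIT_DEPTH * CHANNELS  # 1,411,200 bits/s
mp3_bitrate = 128_000                             # a common MP3 bitrate

song_seconds = 4 * 60  # a hypothetical four-minute song
pcm_mb = pcm_bitrate * song_seconds / 8 / 1_000_000
mp3_mb = mp3_bitrate * song_seconds / 8 / 1_000_000

print(f"PCM: {pcm_mb:.1f} MB, MP3: {mp3_mb:.2f} MB")           # PCM: 42.3 MB, MP3: 3.84 MB
print(f"Compression ratio: {pcm_bitrate / mp3_bitrate:.1f}x")  # about 11.0x
```

An order-of-magnitude reduction like this is what made transmitting music over the networks of the 1990s plausible in the first place.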

As the technology advanced, the encoding algorithms became more complex, allowing for things like average and variable bitrate encoding, in which more complex parts of the audio are recorded at a higher bit rate than less complex ones, resulting in higher-quality sound.

After realizing that this new format could be of great use to the growing Internet, Brandenburg and MPEG decided on a file extension in 1995: .mp3. It was around this time that Brandenburg was asked a telling question by an English entrepreneur: “Do you know that this will destroy the music industry?”

Looking back on this conversation, it seems likely that neither of them had any idea to what degree this would be true. But it didn’t take long for it to become clear just how big of a shakeup this would cause.

In the mid-90s, MP3 decoding software was cheap — WinAmp, one of the most widely downloaded Windows programs of the era, was free (though it went to a freemium model where extras could be paid for). But encoding software was expensive, and formed the center of Fraunhofer's business model.

(If the image above makes you yearn for pre-iTunes days of digital music, try making a portable version of WinAmp for a USB drive!)

Unfortunately for Fraunhofer and the music industry at large, an Australian student bought professional-grade encoding software with a stolen credit card in 1997 and distributed the core of the software as freeware. Brandenburg told NPR that it was in 1997 that he “got the impression that the avalanche was rolling and no one could stop it anymore.”

The inevitable rise of peer-to-peer music sharing resulted in one of the most infamous companies of the Internet age: Napster. Although it was only around for two years, the invention of Shawn Fanning, John Fanning, and Sean Parker shook up the music world like no other piece of software before or since. Napster was a simple, free peer-to-peer (P2P) file-sharing service; it wasn’t the first, but its focus on MP3 sharing catapulted it to almost 25 million verified users in February of 2001.

Although it was used by a wide variety of people, Napster is often associated with college students of the day; a number of universities blocked the service from their networks, and those that didn’t reported huge amounts of traffic — according to one 2000 article, some administrators reported between 40 and 61 percent of the traffic from their college networks going to Napster.

Of course, this free distribution of music couldn’t last long before it was attacked by the music industry. The first big attack on Napster came from thrash gods Metallica in 2000: after they discovered that their single “I Disappear” had been leaked to Napster before it was released, and had even made it to radio, they filed a lawsuit against the service under the Digital Millennium Copyright Act. Dr. Dre quickly followed suit (no pun intended). The Recording Industry Association of America also filed a suit. As a result of these lawsuits, Napster shut down in 2001 and declared bankruptcy the following year.

Despite the attack and subsequent quick death of Napster, many other P2P file-sharing services sprung up. If you were in your late teens or twenties in the early 2000s, you almost certainly remember LimeWire, Kazaa, Madster, or Scour Exchange. It was not a good time for these services, and many of them were shut down with similar lawsuits.

Of course, P2P music sharing still exists today, with BitTorrent being one of the most popular formats in use — especially because of its decentralized format, which can’t easily be shut down. BitTorrent trackers, however, which help users find each other and the files they’re looking for, can and have been targeted by lawsuits and taken down.

By this point, MP3s had moved off of the computer and into listeners’ pockets. Different sources have different opinions about what the first MP3 player was, but Audio Highway’s Listen Up MP3 player, released in 1996, is a good bet. The MPMan, released by Saehan Information Systems in 1998, followed close on its heels.

These were relatively rudimentary systems by today’s standards, holding six to twelve songs and displaying the current song on a small screen. The Diamond Rio, Archos Jukebox, Creative Nomad Jukebox, and a few others were released in the following years, but the market destroyer was yet to come: the Apple iPod, in 2001.

The first generation iPod was a monster, containing a 5 GB hard drive that held up to 1,000 songs and selling for $400 (interestingly, the first phone with MP3 capabilities, the Samsung SPH-M100, was launched the year before, in 2000). The mechanical scroll wheel and five-button layout became synonymous with MP3 players very quickly due to the iPod’s popularity; its small size helped catapult it to the forefront of the MP3 player scene.

Over the next 14 years, continuing through to today, the iPod went through a large number of iterations, seeing a significant decrease in size and weight, the introduction of touch-control scroll wheels and color screens, and a huge jump in available storage; the final iPod Classic (as it was called) had 160 GB of storage, 32 times that of the original.

In the years after the release of the first iPod, we’d see the release of a number of other models, including the iPod Mini, iPod Shuffle, iPod Nano, and iPod Touch. Other significant MP3 players would be released during the reign of the iPod, but none have come close to eclipsing the market dominance of Apple’s sleek little player.

The Siri personal assistant, Retina display technology, cameras, video recording, voice control, Bluetooth, and Wi-Fi connectivity were all added over the years, continuing Apple’s history of innovation. In September 2012, Apple reported that 350 million iPods had been sold worldwide.

Of course, where goes the iPod, so goes iTunes. It’s no surprise that iTunes debuted in 2001 alongside the iPod as “the world’s best and easiest to use ‘jukebox’ software.” More important was the release in 2003 of iTunes 4, which included the iTunes Music Store, Apple’s entry into the music sales business.

The ability to purchase a song or an entire album with a single click was obviously very appealing to users, and has remained so — iTunes has been the single largest distributor of music in the United States since 2008, and the largest in the world since 2010, despite some backlash over digital rights management and a few legal disputes.

Despite the technological superiority and overall popularity of the MP3 format, the iTunes Store doesn’t use this encoding; the songs sold are now encoded in the Advanced Audio Coding (AAC) format, which was standardized in 1997 and intended as the successor to MP3. With more advanced encoding, higher quality at similar bit rates, and more flexibility, AAC is a superior format. Although MP3 is still prevalent, AAC has more industry support, and will likely completely replace MP3 in the near future.

The Streaming Revolution: Pandora

Though the title of “first music streaming service” isn’t easy to bestow, Pandora easily takes the “biggest early music streaming service” label. Launched in 2005, it pioneered the style of music recommendation service that would grow to become one of the biggest trends in modern music.

Five years before Pandora became a reality, the Music Genome Project was founded in an attempt to “capture the essence of music at the most fundamental level.” Pandora is the “custodian” of this project, which assigns values for up to 450 musical characteristics per song, depending on the genre: 150 for rock and pop, 350 for rap, 400 for jazz, and up to 450 for other genres, such as world music and classical.

These characteristics are assigned by human analysts, about 25 of whom work at any given time, each coding two to four songs per hour, for a total of about 10,000 songs per month. This information is fed into an algorithm that lets a user listen to songs similar to a given song, album, or artist (or, in the case of iTunes Radio, an entire music library).

Serving as a discovery engine, this technology has introduced millions of listeners to thousands of bands across the world and opened up a huge range of previously unavailable listening experiences. Of course, there are criticisms of Pandora’s recommendation engine, including a degree of homogeneity, especially after Pandora introduced the “thumbs-up / thumbs-down” rating system: continued rating of songs would eventually shrink a user-created channel down to a very small pool of tracks.

But this didn’t stop people from using the service — in April 2013, Pandora had 200 million users, and after their IPO in 2011, they were valued at $2.6 billion. Much of their revenue comes through ads placed on the service that listeners hear between songs, and is supplemented by an ad-free premium plan.

Pandora’s rise to prominence wasn’t easy; the idea of a service that allows listeners to hear music from tens of thousands of artists without buying a single album is an understandably controversial one. Pandora and other music streaming services have faced nearly constant battles over royalties paid to artists, usually with artists demanding higher rates. With rights holders earning, at most, cents per play, it takes a very large number of plays through a streaming service for an artist or record label to make any money.

However, the degree to which artists suffer from lost album sales is debatable; Tim Westergren, founder of Pandora, stated in 2012 that a few artists are receiving payments of $1 million annually, with a couple pulling in closer to $3 million. He also said that 2,000 artists would receive payments of $10,000 or more, while 800 would receive $50,000 or more. With Pandora’s reported earnings in the second quarter of 2012 alone hovering around $100 million, however, most artists still weren’t happy.

The battle over royalties and online music streaming certainly wasn’t limited to Pandora; Spotify, an increasingly popular online music streaming service, has faced its own difficulties with artists’ dissatisfaction over royalties, including a very public dispute with Taylor Swift.

Swift’s very public act of pulling her music from Spotify certainly helped the service gain exposure in the public eye — last year, a poll published by Fortune showed it as the fourth most-used streaming service in America, behind Pandora, iHeartRadio, and iTunes Radio. This poll showed about five times as many users listening to Pandora as it did Spotify, though this gap is likely to have closed a bit since then.

In December 2013, Spotify released data on how much rights holders were paid per play of their songs, with the amounts surprising many people with how low they were: on average, rights holders received between $0.006 and $0.0084 per play. That’s less than a cent per play, which means hundreds of thousands or millions of plays are required for any meaningful earnings. Pandora, according to one 2013 report, pays $0.0012 per play to record labels, and $0.0002 to artists, which means an artist earns $200 per 1 million plays. No matter how you figure it, that’s not very much.
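Those per-play figures make the arithmetic easy to check. Here is a minimal sketch in Python, using the rates as reported in the article; the constant names are my own, not anything from a real Pandora or Spotify API:

```python
# Per-play royalty rates as reported in the article (circa 2013).
SPOTIFY_RIGHTS_HOLDER_RANGE = (0.006, 0.0084)  # dollars per play, range
PANDORA_LABEL_PER_PLAY = 0.0012                # dollars per play, to the label
PANDORA_ARTIST_PER_PLAY = 0.0002               # dollars per play, to the artist

def earnings(plays: int, rate_per_play: float) -> float:
    """Gross payout in dollars for a given number of plays at a per-play rate."""
    return plays * rate_per_play

# One million plays at Pandora's reported artist rate grosses just $200.
print(earnings(1_000_000, PANDORA_ARTIST_PER_PLAY))
```

At these rates, even ten million plays would gross an artist only about $2,000, which is why the article's point stands however you figure it.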

Of course, there’s a lot at stake here. Artists deserve to be paid for their work, especially when it’s their music that’s bringing millions of listeners to Pandora, iHeartRadio, and other online streaming services that are making millions of dollars on the ads that they sell. Artists and record labels work hard, and the astronomical rise in popularity of streaming has put a big damper on CD sales.

Within the past few years, online streaming has surpassed digital music sales, adding to the worries that allowing listeners to access music for free (or very nearly free; a premium subscription to Spotify is only $10 per month) will destroy the music industry, as the unnamed English entrepreneur predicted to Brandenburg when he showed off the MP3. Whatever the reason, album sales are tanking, with totals hitting record lows, and online streaming is taking off.

Whether or not the increase in streaming can make up for the drop in album sales isn’t clear; a Complex article from earlier this year presented some figures suggesting that the money made from billions of song plays via online streaming may have helped the record industry break even in 2014, but they don’t provide a comprehensive picture, and a comprehensive one is tough to assemble.

It’s clear, however, that streaming services are doing well: not only are Pandora, iHeartRadio, iTunes Radio, Spotify, Google Play All Access, Rhapsody, Slacker, and TuneIn Radio prospering, but new names are entering the market as well.

YouTube is working on a premium streaming service, and the very loosely Dr. Dre-affiliated Beats Radio recently debuted. Jay-Z recently launched another service called Tidal. No matter what you have to say about listeners, artists, and the music industry, being in the streaming business is a good way to make money.

At the moment, it seems that the system is precariously balanced, with some artists appreciating the publicity that they’re getting from online streaming services, and others — as well as record labels and other industry members — not happy about the fractions of pennies that they’re making per play. Despite there being no clear alternative to this model, it doesn’t look likely to last long in its current incarnation; there are just too many people on both sides who are unhappy with it.

One group that’s very happy with it, however, is listeners. The ease with which users can access tens of millions of songs in a matter of seconds is extremely appealing to a wide variety of listeners, from the most casual to the most committed.

One striking statistic shows just how good online streaming is for listeners: music piracy in Norway has dropped by an astonishing 80 percent since streaming became popular there. And if online streaming is better and easier than piracy, it’s clear that it’s good for listeners, probably to the point of being bad for everyone else.

Besides quick access to a monumental amount of music, of course, the biggest advantage of this format is that it doesn’t require terabytes of hard drive space to store it all. Being able to stream from the cloud and download a few albums at a time for mobile listening is very space efficient; this was not the case in the days of Napster and LimeWire, when users had to download all of their music, which both took a long time and required a huge amount of space.

The Future

So if streaming loses the crown as the most widely used form of music listening, what will take its place? To put it simply, I have no idea. When I was pondering this question a few days ago, I thought that music technology could advance so drastically within the next five or ten years that we wouldn’t even be listening to artists anymore.

We’d plug ourselves into machines that would take our tastes and procedurally generate new music that would perfectly fit what we like in our music libraries, much like video games are using procedural generation. While some people are experimenting with procedural music, it doesn’t look like anyone has tried to take that next step into complete customization.

However, there are a few signs that I might not be too far off here. Janel Torkington, in an article on the future of music listening, points out that no matter how many tracks we have available on Spotify or Beats Radio, we still have to make decisions on what we want to listen to.

Which means that our current way of listening to music is actually harder than listening to the radio, which is still strangely popular among Americans — 91 percent of Americans listened to AM/FM radio in 2013. This suggests that a lot of people are looking not only for music that they like, but for an easy listening experience that allows them to simply consume and not create.

Which is why people like Paul Lamere are looking into “zero-UI” music players. These players would ideally require no interaction from the listener whatsoever — they would use a wide range of information made available to them (demographic information; Facebook and Twitter posts; music library information; details on which songs were playing when the user turned up the volume, skipped a track, or abandoned a listening session; the current activity the user is taking part in, from walking to working to working out) to generate a highly targeted playlist that not only works with the user’s taste, but also their context. While many of us music aficionados might be horrified by this idea, Lamere makes a strong case for the fact that this sort of system would be perfect for the majority of music listeners.
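To make the idea concrete, here is a minimal, purely hypothetical sketch of the ranking step a zero-UI player might perform; the signal names, weights, and types are illustrative assumptions, not any real Echo Nest or Spotify API:

```python
from dataclasses import dataclass

@dataclass
class Track:
    title: str
    taste_match: float    # 0..1, similarity to the user's listening history
    context_match: float  # 0..1, fit for the inferred activity (e.g. a workout)
    skip_rate: float      # 0..1, how often this user has skipped the track

def score(track: Track) -> float:
    # Favor tracks the user likes that also suit the moment,
    # and penalize anything the user habitually skips.
    # The weights here are arbitrary illustrative values.
    return 0.5 * track.taste_match + 0.4 * track.context_match - 0.6 * track.skip_rate

def next_track(candidates: list[Track]) -> Track:
    # Pick the highest-scoring candidate -- no user interaction required.
    return max(candidates, key=score)

queue = [
    Track("Old Favorite", taste_match=0.9, context_match=0.2, skip_rate=0.0),
    Track("Workout Anthem", taste_match=0.7, context_match=0.9, skip_rate=0.1),
]
print(next_track(queue).title)
```

In a real system, the taste and context signals would themselves be inferred from the demographic, social, and sensor data Lamere describes; the point of the sketch is only that the player, not the listener, makes the choice.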

Spotify has also looked into similar ideas — a Guardian article from last year reported that they were looking into ways to incorporate heart rate, motion, temperature, and sleep patterns to figure out what the listener is doing and what sort of music they might like to hear. It’s obvious that Spotify is serious about this, as they bought The Echo Nest, the company where Lamere works. The Echo Nest is an innovative “musical intelligence platform” that powers many discovery engines and other recommendation applications. Just where the acquisition will lead is anybody’s guess, but it’s a safe bet that it will be pretty cool.

Then again, we’ve seen what happens when technology goes further and faster than we’re ready for: look at the advent of tiny cell phones, and how quickly we went back to full-size phones and even huge phablets. Look at how small earbuds got before we decided that the big, booming sound of a pair of Sennheisers or Bose over-ears was better. Recommendation services may be the hot thing of the day, but I wouldn’t be surprised if we go back to the radio era, or even the mixtape era, before we progress further.

There’s something special about picking out an album that not only contains one of your favorite tracks, but has the perfect balance of up-tempo tracks and soulful melodies; that hits the sweet spot of combining build-ups and breakdowns, catchy hooks and crushing riffs. The art of putting an album together is still alive and well, even if it’s a bit underappreciated at the moment. Its day very well may return.

Conclusion

The history of music consumption is a long one, and spans almost 150 years. The history of music, and music performance, is a lot longer, with some philosophers believing that music is one of the defining characteristics that makes humans different from lower-order animals.

Music has played a role in how we celebrate, worship, communicate, design, and build for centuries, and it will likely remain one of the most powerful tools in the human cognitive vocabulary. Music is a powerful thing, and the way in which we relate to it has changed as we have evolved and become more advanced as a species.

Of course, there have been some big changes; the way in which we consume music has been a strong defining force on music itself, and each shift, from the gramophone to the tape and from the CD to the MP3, has marked a notable technological turning point.

The future of music consumption is a question mark at the moment — further algorithmic control looks likely, but the exact degree to which it will come to rule our listening experience remains unknown. And regression back to full control certainly isn’t out of the question. The only sure thing is that music — in one form or another — will be with us forever, whether we go back to listening to vinyl on home players or we find a way to implant individualized algorithmic composers and players directly into our brains.

From the gramophone to the FM radio, from the Walkman to the MP3 player, from the anarchy of Napster to the algorithmic rule of Pandora, the systems through which we relate to music today only bear a passing resemblance to the ways we listened 100 years ago. And yet, we still listen. Whether music is part of what makes us human, an evolutionary quirk, an escape mechanism, or a distinctly advanced way of relating to our environment, it’s here to stay. And we’ll continue to innovate, challenge, and completely change the ways in which we consume it.
