Paradigma Digital – Technology for business
https://en.paradigmadigital.com
Who is in Charge in a Blockchain Network?
https://en.paradigmadigital.com/techbiz/who-is-in-charge-in-a-blockchain-network/
Mon, 21 Jan 2019

If you are starting to get into blockchain, you will very likely have asked yourself the following question: who decides what gets written in the network? Who validates the transactions in a blockchain network?

Well, the most accurate answer is ‘it depends’. Each blockchain network decides which transactions are valid by using what is known as a consensus algorithm.

What is a consensus algorithm?

Consensus in blockchain networks refers to the process of reaching an agreement among the various participants in the network about the transactions that are going to be written in the chain.

In other words, the consensus is responsible for all nodes in the blockchain network having the same data, which in turn prevents tampering.

The way in which this is implemented in the network is known as the ‘consensus algorithm’ or ‘consensus mechanism’, and it is part of the network’s core – even though some network implementations allow choosing among several available algorithms.

Below we review the main consensus mechanisms in use today.

Proof of Work (PoW)

The proof-of-work algorithm was the first of the mechanisms used in blockchain. It dates back 20 years, when Hashcash, the first-ever PoW system, appeared.

The basis for this algorithm is that, in order for two strangers who are going to participate in a system to establish a relationship, it is necessary for both of them to somehow show their interest in doing so, and the way to guarantee this interest is for them to prove that they have allocated a certain amount of resources to this end (the proof of work).

However, not just any proof of work is valid: to be practical, the proof must be costly to produce but easy to verify.

An example of a proof of work is shown below:

Finding a page in a book where the fourth letter is an ‘a’ and the second to last an ‘m’.

In order to find it, I would need to check all pages one after the other to see whether the above condition is fulfilled.

I could start at the beginning, the end, the middle, etc. There is no strategy beyond checking pages, which would indeed require some effort on the part of the person who wanted to provide that proof.

In addition, it would be something rather easy to check since, once the interested party shared the number of the page, the other party would only have to go to that page to see whether it was actually so.

The idea behind a proof of work over a data set is to apply a function to the data and check whether a previously agreed condition is met.

The most famous proof-of-work algorithm is Bitcoin’s. This algorithm builds a block that contains, among other things, the following information:

The address of the preceding block.

The transactions to be included in the block.

A ‘nonce’ field. Here is where the key of the proof of work resides: a random value is entered in this field, so the resulting block changes with every different nonce despite containing the same transactions.

A SHA-256 hash function is applied to this ‘block’ (which is a succession of bytes), and the result is always a string of 64 hexadecimal digits. It is important to note that a hash function is deterministic, that is, it always produces the same result when applied to a given input.
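That determinism is easy to see in a couple of lines; the block serialization below is an invented placeholder, not Bitcoin’s real binary format:

```python
import hashlib

# Hypothetical block serialization: previous address + transactions + nonce.
block = b"prev-block-addr|tx1,tx2,tx3|nonce=42"

digest1 = hashlib.sha256(block).hexdigest()
digest2 = hashlib.sha256(block).hexdigest()

print(len(digest1))        # 64: always 64 hexadecimal digits
print(digest1 == digest2)  # True: same input always yields the same output
```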

Once the result is obtained, the algorithm checks whether the calculated number is smaller than a given target by checking whether the result starts with a certain number of zeroes (0); if so, it considers the block valid and stops looking for new ‘nonce’ values.
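The search loop just described can be sketched in a few lines (the string-based block format and the example transactions are invented; real Bitcoin headers are binary and the difficulty is a numeric target, not a zero count):

```python
import hashlib

def mine(prev_block: str, transactions: str, difficulty: int) -> int:
    """Try nonce values until the block hash starts with `difficulty` zeroes."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        block = f"{prev_block}|{transactions}|{nonce}".encode()
        if hashlib.sha256(block).hexdigest().startswith(prefix):
            return nonce  # valid block found: stop trying new nonce values
        nonce += 1

nonce = mine("prev-block-addr", "alice->bob:5", difficulty=4)

# Verification is cheap for everyone else: hash once and check the prefix.
block = f"prev-block-addr|alice->bob:5|{nonce}".encode()
print(hashlib.sha256(block).hexdigest().startswith("0000"))  # True
```

Raising `difficulty` by one multiplies the expected number of attempts by 16 (one more hexadecimal digit), which is how the cost of producing the proof is tuned while verification remains a single hash.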

In the Bitcoin and Ethereum networks – and in PoW networks in general – the first node that finds a valid block is rewarded for the work done, so the members of the system compete with each other.

Pros of PoW

The proof of work cannot be predicted as it depends on the data themselves and the address of the previous block.

The reward is shared out among participating nodes automatically and depends exclusively on each miner’s hash rate.

It encourages good behaviour. Since the resources involved (hardware, time and electricity) are expensive, malicious behaviour is penalized with a loss of profit.

It is resistant to Sybil attacks, in which an attacker creates many fake nodes to outweigh honest users.

The proof cannot be obtained in advance since the appearance of each new block requires launching a new round of calculations (each new block contains a link to the previous block).

The system is fairly clear and easy to understand.

Cons of PoW

It uses up a lot of power. The Bitcoin network uses huge amounts of power. As new blocks are generated, the difficulty has to be increased, so this consumption tends to be ever greater.

It is vulnerable to a 51% attack. In order to fraudulently modify the chain, it is necessary to control 51% of the network’s computing power.

At the beginning this was not a risk due to the wide distribution of the nodes; however, Bitcoin mining is becoming increasingly specialized, with groups of users investing in complex mining systems.

This entails a risk of centralization around a small group of users who could potentially carry out an attack on the chain, such that only those blocks this group wanted would be accepted.

The speed of block validation is relatively low owing to the proof-of-work mechanism itself, so certain use cases are incompatible with this system.

Proof of Stake (PoS)

The purpose of the proof-of-stake algorithm is to eliminate the inefficiencies associated with proof of work. In this mechanism there are two node types: regular nodes, which store a copy of the chain and can be queried, and validator nodes (there are no miners).

The probability that a participant has of being selected to validate a block – and getting a reward for it – is proportional to the amount of funds (cryptocurrencies or tokens assigned by some specific system) they are willing to stake.

These funds then become a guarantee of the validator’s good faith. If the transaction is found to be illegitimate, the validator loses their funds; otherwise, they receive a reward for validating the block.
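A stake-weighted selection of this kind can be sketched in a few lines (node names and stake figures are invented, and real networks add many refinements on top of this basic lottery):

```python
import random

# Hypothetical stakes: node-a has put up half of all the funds at stake.
stakes = {"node-a": 50, "node-b": 30, "node-c": 20}

def pick_validator(stakes: dict, rng: random.Random) -> str:
    """Choose a validator with probability proportional to its stake."""
    nodes = list(stakes)
    return rng.choices(nodes, weights=[stakes[n] for n in nodes], k=1)[0]

rng = random.Random(7)  # seeded for reproducibility
picks = [pick_validator(stakes, rng) for _ in range(10_000)]
print(picks.count("node-a") / len(picks))  # roughly 0.5, matching its 50% stake
```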

The idea behind this algorithm is widely accepted, there being different variations that aim to optimize the results, thus preventing e.g. a few nodes with a lot of funds from becoming the only nodes with the capacity to decide.

However, the need to know the identity of the validator nodes and the large power these nodes have compared to the rest render this protocol quite unsuitable for public networks.

Pros of PoS

Reduced power consumption. Since there is no proof of work, the only work that is done is building the block and reaching an understanding among several nodes.

Very high block rate, which allows using blockchain networks in use cases requiring a quick response.

Quicker decision making. Given that the largest stakeholders carry additional weight in the vote, it takes less time for a consensus to be reached, which makes transactions faster.

Cons of PoS

There is a tendency for the network to become centralized, particularly in the case of networks which are not very large and in which it is easy for a node to end up having a big influence on the decision-making process.

It is susceptible to a 51% attack – although this is harder to pull off, since the nodes with the most power are precisely the ones most interested in the network operating well: as the biggest holders of funds, a devaluation would hurt them most.

Exposing the wallet over the Internet can represent a security risk.

Forking could occur, given that two nodes could meet the block validation conditions at the same time. The network tries to mitigate this risk with a subsequent validation by the remaining nodes, but potentially both branches could end up being equally validated.

Variations of PoS

Some of the most important variations of the proof-of-stake mechanism are:

Delegated proof of stake (DPoS). To avoid the risk involved in exposing the wallet on the network, all fund owners delegate their validation capacity to a node type known as a representative, thus avoiding exposing their funds directly.

Proof of Importance (PoI), whereby the reputation of the node is determined by its participation therein.

Proof of Authority (PoA), which we will discuss next.

Proof of Authority (PoA)

The proof-of-authority mechanism can be understood as a variation of PoS. In PoA, validations are also done by validator nodes. The difference lies in that the capacity for validation is not determined by the amount of funds an account has but by the ‘identity’ or legitimacy thereof.

The mechanism for gaining ‘validator’ status must be clear and known beforehand, since the maintainability of the network depends on it.

Although some people think this algorithm is not decentralized enough (and they are right), it is a good match for consortium networks, whether private or semi-public.

In this type of network, the identity of the nodes is guaranteed by their position in the consortium or society such that a potential loss of reputation of the person who is in charge of a node fully discourages them from acting dishonestly.

Pros of PoA

The reliability of the network: a well-defined group of users is responsible for decision making, which discourages malicious users.

There is no mining and, hence, network maintenance costs are low.

High transaction rates due to the fact that consensus is reached extremely quickly.

Cons of PoA

The transaction validation capacity is highly centralized.

Byzantine Fault Tolerance (BFT)

The BFT mechanism is inspired by the well-known Byzantine generals problem.

In this problem, a number of generals must simultaneously make a Boolean-type decision (yes or no: to attack or not to attack) in order for their military strategy to be successful.

The problem arises when we introduce the possibility that one or more of the generals is a traitor who provides erroneous information that can turn the attack into a defeat.

The idea behind it is to be able to tell whether one of the nodes is telling the truth when it transmits a message.

In the case of a blockchain network with BFT, all nodes must be known, so the network must not need to change frequently. These nodes exchange information which could potentially be correct or incorrect.

The algorithm is based on the honesty of the majority of validators such that, if one of them were dishonest, the others could expose the fraud attempt and repel it.

BFT makes sense when all the participants in the process know each other and are not inclined to change. A good example of this is the voting flat mates do when deciding whether to make home repairs or not.
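The super-majority rule at the heart of this family of algorithms can be sketched as follows (a toy model of the agreement step only, not the full IBFT message protocol; node names and votes are invented):

```python
from collections import Counter

def bft_decision(votes: dict):
    """Commit a value only if strictly more than 2/3 of the nodes vote for it."""
    quorum = (2 * len(votes)) // 3 + 1
    value, count = Counter(votes.values()).most_common(1)[0]
    return value if count >= quorum else None

# One traitor out of four cannot block or fake the decision...
print(bft_decision({"n1": "attack", "n2": "attack", "n3": "attack", "n4": "retreat"}))  # attack

# ...but without a super-majority, no value is committed at all.
print(bft_decision({"n1": "attack", "n2": "retreat", "n3": "attack", "n4": "retreat"}))  # None
```

With four nodes the quorum is three votes, so a single dishonest node can neither impose a fraudulent value nor prevent the honest majority from agreeing.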

The most relevant implementation of this kind of mechanism is Istanbul Byzantine Fault Tolerance (IBFT).

Pros of BFT

High network performance; by default, 1 second per block.

The transaction-associated cost is very low.

There can only be one possible block at any given time, which eliminates the risk of chain forking.

The nodes can take the power away from malicious nodes.

It requires a large majority (approximately 66%) of the nodes to validate the transactions, so the risk is minimized.

Cons of BFT

As is the case with all authority-based systems, there is a tendency toward centralization, which must be countered with modifications that vary which nodes take part in the validations.

It only works in small networks.

Conclusion

If you have made it all the way down here, now you know a little bit more about the different consensus algorithms and how they affect the characteristics of blockchain networks.

As is often the case when technologies are compared, one should not think in terms of right or wrong: there are many different approaches, which should all be properly evaluated before choosing the one that best fits the goal in mind – assuming its limitations and mitigating its risks.

Furthermore, most blockchain implementations are open source, so there is always room for the community to collaborate. And you, are you game?

Your company does not need tribes or squads
https://en.paradigmadigital.com/techbiz/your-company-does-not-need-tribes-or-squads/
Mon, 14 Jan 2019

It would seem that in the past few months companies have finally become aware of the importance of culture as a fundamental pillar in their digital transformation process.

The problem is that this interest in culture has come in the form of a ‘teenage fashion trend’. Apparently all companies today have to embrace agilism and create tribes, even if they do not know full well what it is all about or understand the real impact this change will have.

Big companies are ardently devoted to practices, those miraculous remedies that can transform a company quickly and painlessly. These remedies are based on the same magical compound that makes cellulite-fighting creams, abdominal appliances and hair growth tonics work. The passion for shoehorning tribes and squads into companies stems from this school of thought.

It is mindboggling that we have not yet realized that such practices have a clear purpose behind them, and that practices that work in one environment will not necessarily work in another.

The Spotify model, for example, was created from scratch, on a trial and error basis, by a product company having specific needs. Spotify has been making decisions over time, such as using the name ‘squad’ instead of ‘scrum team’.

The term ‘squad’ stresses that teams are completely autonomous to decide between Scrum and Kanban. In addition, these squads have plenty of leeway when it comes to choosing tools and technologies and do not stick to common conventions too closely.

In most Spanish companies the context is very different from Spotify’s. They do not want teams to have that much freedom and prefer to establish the methodology, tools, technologies and so forth in a more rigid way, so that teams work in a more homogeneous manner.

Nevertheless, they still use the word squad because they do not understand either the logic behind Spotify’s decisions or the context in which they were made.

But the main problem is not that companies are copying the Spotify model in a context where it does not apply. There is an even more serious issue behind all this: companies do not even need an organisational change while they are still at an early stage of maturity in terms of agilism and culture.

They simply do it because it is trendy, but have they even stopped to think about why they would need an organizational change?

If we do not understand the purpose behind all these changes, we will never make them correctly.

An organizational change must stem from an objective and begin with a change in people’s mentality that goes hand in hand with a change in the leadership model.

Once we have internalized these principles, we can move on to practices, applying them iteratively and incrementally in small groups before introducing them in the entire company.

However, this is always done the wrong way around. Companies jump directly to practices without first changing their mentality because they think they are going to save time. And they repeatedly make the same mistake:

A traditional department head will continue acting in an authoritarian manner if there is no change in the leadership model and no change in the operational processes – even if their department is now called a tribe.

A work team is not going to stop reporting to a project manager at a daily meeting just because the manager is now called a Scrum Master or because there is a special room for daily meetings.

A traditional project plan is not going to turn into a product backlog just because it is posted on a wall using sticky notes.

An army of agile coaches cannot transform a company on its own without a plan with goals and monitoring KPIs.

And so on, through a long list of practices that are useless unless a company has the degree of maturity needed to implement them.

Spotify did not come up with its model by copying it from another company; it built it from scratch to rectify its particular situation. And it did it after years of working with Agile, on the foundations of a deeply rooted digital culture.

Every company needs to forge its own path, and I firmly believe that organizational change should not be among the first steps that are taken.

Companies should copy Spotify’s culture rather than its organisational model. Copying the latter is nothing more than ‘makeup’. It is a new chassis for an old car whose engine keeps on breaking down.

Companies stay away from cultural changes at all costs because they take time. They want a quick fix, as cheap and painless as possible. But a new bodywork and paint job does not make a car run better, just as an abdominator will not flatten your belly if you do not use it 3 times a week and stop eating scones too.

Because if we start with the What (the practices) before understanding the Why (the principles), we will end up getting frustrated and fighting the change.

Big Data Spain 2018: the 7th edition in 7 talks
https://en.paradigmadigital.com/techbiz/review-big-data-spain-2018/
Mon, 07 Jan 2019

Although we are already looking forward to Big Data Spain 2019, we would like to start off the year with a brief overview of this year’s edition.

After more than 90 speakers managed to bring together almost 1700 attendees, today we look back at Big Data Spain 2018 to remember the talks that had the greatest impact.

Thus, we tell you how the 7th edition of the conference went with the help of these seven clips. Do not miss them!

Patrick E Rodi’s inspirational speech was the keynote of Big Data Spain 2018. This professor of Mechanical Engineering at Rice University (Texas) talked about data management in the aerospace field and how AI and big data are changing the way NASA – with which he has been collaborating since 2007 – works.

There is no question that 2018 was Nuria Oliver’s year: she was named ‘Engineer of the Year’ and closed the year by being inducted into the Spanish Royal Academy of Engineering. In this last edition of Big Data Spain she told us how big data can be used to improve the world.

How can data and design be combined in creating digital products? Elena Alfaro put the spotlight on this interesting issue in her talk this year. From it we drew interesting ideas, conclusions and concepts.

We are in the midst of the era of the voice, and the rise of virtual assistants is proof of this. However, 97% of companies, regardless of size, do not communicate through sound. How can our brand increase its impact and generate emotions through music? The soundtrack composer Alfonso G Aguilar told us about it in the closing talk.

Which technologies will triumph in 2019?
https://en.paradigmadigital.com/techbiz/which-technologies-will-triumph-in-2019/
Mon, 31 Dec 2018

2018 is coming to an end, a year in which we have seen many technologies burst onto the scene; some of them have definitively become a part of our lives, whereas others have reached the end of the line and we have bid them adieu.

Nevertheless, come this time of the year, the million-dollar question is: what does 2019 have in store for the IT industry? Which technologies will win us over in the coming months?

Of course, we do not have a crystal ball to see the future with, but we can still identify patterns in the main technological fields and intuit where the future of technology lies – at least in 2019.

Multi-cloud, hybrid cloud

Many business models have gone cloud in recent years. The cloud has become the best environment for developing digital products.

A greater degree of maturity in the adoption of cloud solutions by Spanish companies has been observed in 2018. Trends such as native cloud developments and the implementation of multi-cloud or hybrid cloud strategies will continue to become consolidated throughout the next year.

What are its advantages? Choosing among different cloud providers is an important business decision. Selecting just one solution is usually enough for start-ups but not for experienced firms.

A multi-cloud approach can help to tap into the wide range of services offered by different providers. Likewise, companies that devise a hybrid strategy gain significant advantages.

Creating a private cloud by modernizing their IT ecosystem and combining it with one or several public clouds gives them the best of both worlds: all the flexibility of the cloud while complying with highly stringent regulatory frameworks, storing their data locally and continuing to pay off their investment in infrastructure.

Chatbots and human-computer interaction (HCI)

Lately we have seen how chatbots have become a reality. Their rise has given us an idea of the capacity for interaction between humans and machines. We are referring to augmented reality and virtual reality and the possibilities they afford.

But why can 2019 be their year? The rate at which artificial intelligence software is improving and the rise of Silicon Valley technologies (such as those from Google, Facebook and other digital giants) are making them grow and develop at the speed of light.

There is no question that their interaction with human beings will be increasingly greater and they will be ever more present in our daily lives. Companies in other industries are not unaware of this radical change in the interaction with users and some of them, such as JustEat, Burger King, Mahou or Destinia, already provide different experiences.

DevOps

In order to shorten the time to market and minimize human error, organizations are beginning to understand how important it is to bring into their ecosystem the tools that will enable them to automate the coding, deployment and execution of their applications.

But this is not only about tools and automation. DevOps culture is reaching the necessary state of maturity for organizations to start adopting it in their work models. A culture that is based on small multidisciplinary teams focused on adding value to the business through the product.

Blockchain

Both companies and public administrations are directing their products toward the end consumer, so guaranteeing the transparency of and trust in their data throughout this process is a differentiating factor.

Blockchain has gone from being a buzzword to one of the tools with which a new way for consumers to interact with both companies and organizations and among themselves is to be built.

In just a few years we will look back and be unable to imagine banking without blockchain, and we will hardly remember having to dig up an old receipt to claim a warranty on a past purchase or repair, because having an immutable record with all that information (and much more) will seem obvious.

2019 is looking to be a key year when we will start to enjoy what was promised.

Big Data in the cloud

Big data is not something new. Most companies have invested a lot of money in the past few years to deploy big data technology and build large data lakes in which to keep their data and put them to good use.

However, the latest movements in the market point to the future of big data being in the cloud. Large cloud technology manufacturers are providing on-demand services, which allows the time to market to be shortened and large amounts of resources to be available for a limited time, and paying only for those resources that are actually used.

The IoT

We live in a world where we want to move daily life into the digital world to offer users experiences that are totally different and have never been seen before. This creates a variety of new challenges, thrilling in terms of both scalability and security.

This is leading to an increased introduction of IoT platforms which, together with big data and AI solutions, are an ideal solution for creating innovative services.

Machine learning

Until now, the complexity of tackling a machine learning project was high, but cloud technology manufacturers are providing solutions that simplify the process of training and productionizing machine learning models. This will give many companies access to this technology, and 2019 will be the year of the machine learning boom.

Quantum Computing

Quantum computing is the ‘next big thing’. The first quantum computers that are ready to be put to use in production have appeared in the past few months.

This technology, which is based on the use of qubits instead of bits, will put a computing power unknown until now at our disposal.

Thus far, the physical problems that are intrinsic to this technology had confined it to the academic and R&D spheres, but it seems that this barrier is starting to crumble and that in the next few years we will start to hear a lot about quantum computing and its applications.

If you think design is expensive, try not designing at all
https://en.paradigmadigital.com/techbiz/if-you-think-design-is-expensive-try-not-designing-at-all/
Mon, 17 Dec 2018

You are probably familiar with this wonderful quote: “If you think education is expensive, try ignorance”. And it is not even Abraham Lincoln’s! Apparently Derek Bok, a former President of Harvard University, said it.

If we had to explain why education is important, we would not lack arguments, but surely they would lead to a long discussion with philosophical connotations about progress and rights. Bok’s phrase is brilliant because it quickly synthesizes a complex concept by means of a brutal contraposition of ideas.

The same happens with design. There is no shortage of arguments to defend how bringing in – good – designers can help your business, but it can take a long time to explain it, with messages that are probably too dense and specialized.

That is why I have tried to emulate Bok, but since my feeble copy will not be enough to convince you, I am going to redouble my efforts and give you three examples where poor design – not only graphic design but design at different levels of impact – has cost a lot.

At the 2017 Oscars, the presenters of the Best Picture award were handed the wrong envelope, containing the card for the Best Actress award. After their initial confusion at seeing what was written on the card, they erroneously assumed that the actress’s name had been printed by mistake but that the name of the film that appeared underneath was indeed the winner. In the end, they chose to read the name of the film, giving rise to a moment of folly that has gone down in the history of entertainment.

The Academy’s logo appeared prominently at the top of the card, which is absolutely irrelevant to presenters. The winner of the award and the film in which she acted appeared below with the same font size and typography. At the bottom, the most important thing: the category in question.

As can be seen in the image below, the minimum alterations that should have been made would have been to put the award category at the top, then, unequivocally, the name of the winning artist and finally – and optionally – the logo of the event.

Before (left) and after. One of the suggestions for improvement that were made (credits)

Besides the design of the process that governed the handling of envelopes (the main cause of the error), a good graphic design of the cards would have made the mistake stand out to the readers and the ensuing great embarrassment would have been averted.

The scandal at the 2017 Oscars embarrassed the Academy, nearly ruined its prestige and almost put an end to its 83-year relationship with PwC.

Bonus track: in case you do not know about it, the way an incomprehensible ballot design changed US history in 2000 is also legendary.

An important US retailer hired Jared Spool’s team to review the usability of its e-commerce user interface: the arrangement of fields, contrasts, the visibility of the call to action, etc. They were also asked to study the high rate of basket abandonment at the last step, where customers were asked to identify themselves to complete the purchase.

Nothing was particularly badly designed in this last step, but after holding several interviews with users and collecting data, they obtained – among others – the following pearls of precious information:

Customers gave the following answers: “I didn’t remember my password”, “I didn’t want to sign up/log in”, or, even worse, a categorical “I came here to buy stuff, not to enter into a relationship with a company”.

The analytics revealed that some users had created and abandoned up to 10 accounts!

The team in charge decided to just meet the demand of users by adding a simple but revolutionary “Continue” button that allowed them to proceed to checkout without having to sign up or log in.

And although it may now seem absurd (since this would require customers to enter their shipping address and other data for each purchase), this was what users preferred, particularly back then.

As Spool stated in 2009 in his famous article “The $300 Million Button,” this change in the purchase flow increased the revenue of this e-store by 300 million dollars a year.

This had nothing to do with visual design or usability. Proper quantitative and qualitative research into the product design process helped them think “outside the box” and thus trigger the transformation of an apparently correct and settled purchase flow.

Surprisingly, despite the huge impact this strategy had and its having been accepted as a good practice, today many e-stores still require customers to either sign up or log in to complete their purchases.

The “customer journey” level: A very poor customer experience design

In 2017, Dr David Dao was waiting in his seat for his United Airlines flight to take off when it was announced that four passengers would have to step off the plane as a result of overbooking.

Three passengers did so of their own accord, after accepting United Airlines’ financial offer, but one more seat needed to be vacated. When the doctor was chosen by lot to leave the plane, he outright refused, and he ended up being violently dragged out by airport security.

Dao suffered a broken nose and lost two teeth.

The video of the incident made the headlines of social and traditional media all over the world, causing enormous reputational damage to the world’s third biggest airline. The company’s elusive first reactions cost the CEO his job a few months later, and nearly $1 billion were written off its market value in just two days.

Several design tools aimed at correctly defining business processes and customer journeys could surely have helped United Airlines to know the needs and concerns of people in similar situations, thus turning the – technically legal – dreadful process of “relocation” into a fair, viable alternative.

Conclusion

Design is sometimes too big a word. It is hard to explain to people that design as a discipline goes far beyond screen design and even user experience design.

It encompasses a series of methods ranging from the strategic to the tactical aimed at improving products, processes and services that have an impact on people’s lives.

On a certain occasion, Charles Eames was asked: “Mr Eames, what are the boundaries of design?” To which he answered: “What are the boundaries of problems?”

Is your office more like a library or a bar?

Time management is a kind of superpower that some people have: it allows them to use their working hours more efficiently, making their work higher in quality, letting it shine much more, and with less stress.

It is a special superpower, not because you are born with it, like Superman, but because anyone can acquire it with a little learning and practice. So if you are reading this, you should get started as soon as possible, because once the superpower begins to emerge it will give you an incredible advantage in your job and your life, and its effect will last forever.

It is useless to have this superpower at work if your co-workers do not have it, since although this superpower can allow you to do incredible things, it does not replace the strength of a team working together.

To do really great things we have to work as a team, and this means that we not only have to worry about our productivity but also about creating a productive environment for the team.

In other words, not only do we have to make good use of our time but we also have to be careful not to waste the time of others.

We must assume that we need things from other people to do our job; we need to interact and collaborate. But at the same time we have to accept something very important: that others also have to achieve their objectives, and their tasks are likely as important as ours. Therefore, we do not have the right to interrupt them without a good reason.

If time management is a superpower, we could say that the villain of the film would be interruptions. Although more than a villain I would say it is like some sort of virus. A contagious virus that spreads rapidly in open workplaces without offices.

Actually, having more open workspaces has been a boon to companies at all levels, but they are not perfect: they lead to too many interruptions.

To exaggerate a little, we could say that offices should have an atmosphere similar to that of a library and in many cases they are more like bars.

In this complex environment of “collaborating but without interrupting,” it is essential that before each act of communication we stop for a second to think about and choose the most appropriate communication channel. In order to do so, we need to keep the following in mind:

The amount of information and its complexity. Whether it is something that requires an explanation to avoid misunderstandings, whether it requires preliminary context or supporting documents, or whether it is a short message that does not require any context.

The urgency of the answer. Whether we need the person to answer immediately because the matter is urgent or whether their answer can wait.

The number of interruptions created. There are synchronous communication channels, which force the other person to stop what they are doing, and there are other channels that allow them to answer when they are free.

Based on these three things, let us see now which channel works best in each case and some tricks to manage your new superpower.

Email

An endemic evil nowadays is to think answering emails is a job. We spend many hours a day answering emails, but we should not forget that email is nothing more than a communication tool, not a job in itself. It is a means, not an end.

It is advisable not to keep your email client open at all times, to avoid distractions, and also to disable notifications and check your email only at certain times of the day. We must treat it as an asynchronous channel; that is, if you send an email you should not expect an immediate response. It allows you to send documents and diagrams, and it lets one person communicate with many.

Since it is a written channel, it is prone to misunderstandings. Therefore, we should avoid sending messages without the right context and also avoid using it for discussing things. Sometimes a 20-minute conversation fixes what a string of 10 long emails was not able to.

In order to minimize the number of emails, it is interesting to use short, direct answers and to avoid open-ended questions. Instead of asking “do you want to meet next week?” it is better to ask “do you want to get together Monday or Tuesday from 4.00 pm to 5.00pm?”

In more traditional environments it is often used to “leave a record” of communications with a view to holding people accountable in the future. It is also used as a collaborative work tool, and even as a document repository. It is important to avoid these misuses in order to minimize the number of emails that are sent.

Phone

The phone is a synchronous voice communication channel. It is appropriate for really urgent matters, as it usually causes an interruption on the other side.

It is advisable to silence it in meetings and moments of concentration so as to not have distractions.

It is a channel that allows you to give context and talk, so it is very suitable for solving matters quickly when you are away from the office and to avoid misunderstandings.

It is also very useful for making the most of time in certain contexts, such as driving (always with a hands-free car kit) or riding public transport (without raising your voice).

Chat

It is perfect for fast, asynchronous communication and a good teamwork tool when the content consists of quick messages.

You have to accept that it is somewhat asynchronous, namely, you have to assume that the other person will answer you when they can.

In that sense, I think we should perhaps make better use of status indicators, since people usually appear as “available” in the corporate chat when they are actually busy.

It is a tool that allows you to communicate with people who are in a meeting, something which, far from being good, can cause them to lose focus on the meeting.

Meetings

Managing meetings is another of the main problems in today’s companies, where we have reached a point where all tasks revolve around the calendar and where anyone can occupy a slot in someone else’s calendar.

In many offices, actual work is done in the few scattered moments that are left free in between days chock full of meetings. Or at home at night, which is even worse.

Meetings split the day in two: before and after the meeting.

It is the perfect channel for having constructive discussions. But before using it think about the following things:

Meetings should have a specific goal. What should we achieve by the end of the meeting? Surprises are fun, but most of the time they are unnecessary, so I suggest not attending any meeting whose goal has not been established beforehand.

Do not invite people “as a courtesy”. The more people there are in a meeting, the harder it will be for it to be productive. Accordingly, you have to select only those participants who can really contribute to the discussion. You should avoid holding meetings where people are on their laptops or mobile phones. And it is necessary to stop the same people from always speaking – because those who speak the most are usually not the ones who have the most to contribute.

We must choose the time well. You have to take people’s schedules and other factors into account: for example, right after lunch people are less alert than early in the morning, which matters for a dynamic group meeting.

Working hours must be observed. We should call a meeting with a defined start time and end time. Those meetings in the evening with no end in sight, where you do not know whether you will get home in time for dinner, should be a thing of the past.

Meetings must be prepared in advance. If a meeting is called to review a document, the document should be sent in advance so that everyone can read it before the meeting takes place. In a meeting, a person who is not as well informed as the rest wastes everybody’s time.

They should not occupy your entire schedule. It is important to save a few hours every day for uninterrupted work, or even to try initiatives such as a no-meetings day. Or, why not, you could suggest introducing it at a global level at your company.

Remote meetings. Many of the meetings we hold do not really need us to be present and could be held either by audio conference or video conference and thus avoid unnecessary trips. This will both save time and be good for the environment.

Most importantly, the agreements and conclusions must be clear at the end of the meeting. Meetings in which decisions are made but which are not turned into actions later and fall by the wayside are very common. This is why we must leave meetings knowing the what, who, when and how of each of the points discussed.

Conclusions

Can you imagine working in an environment where everybody puts these office productivity concepts into practice? Sometimes we answer questions that reach us over the wrong channel. Sometimes we do it out of politeness, and other times because we accept that the boss simply has scant free time and chooses the channel they prefer – that is why they are the boss.

But other times it is better to say NO. This is necessary to get the people around you to value your time and also to start thinking about choosing the right time and channel.

Do you remember the last time you worked half a day nonstop focused on something? If it was a long time ago, this means you have a big problem, so the first thing is to acknowledge it.

We cannot accept interruptions as something normal, because an hour of quality work consists of 60 straight minutes of work, not 4 blocks of 15 minutes distributed throughout the day. Context switching has an enormous cost for the human brain. This is why one hour of uninterrupted work is roughly 10 times more productive than the same hour split into four.

What are key life events and why are they so important?

Answering the first part of the question is simple: key life events are the big moments in a person’s life. For example, moving out of your parents’ home, getting your first job, getting married, having a child, buying a house…

The message is not new. It is a concept that has been used in marketing for years. So why am I talking about it? Why is it important to take it into account when designing digital products?

A good way to understand how important this concept is, is to think like a company. If I were a bank, would I not like to know when someone is thinking about buying a house? Then I could be waiting in the wings to offer them a mortgage. I would win a customer for 30 years!

The same goes for insurance companies. Would it not be wise for them to show up when I am looking to buy a new car, a house…?

Even without putting the big banking and insurance firms into the equation, how many things do you need to book and buy for a wedding? Would catering companies, tailor’s shops, bridal stores or wedding websites not be delighted to know when someone is about to get married, so they could appear in their life?

For companies, key life events are big opportunities to get in touch with their customers. In fact, life moments are almost always emotional, and hence memorable, for customers.

A key life event is also a unique opportunity for companies, as it allows them to become part of an important memory alongside the consumer and creates a chance to build a lasting bond between consumer and business. This is crucial given how little loyalty users show nowadays.

But it is not only useful as far as marketing campaigns are concerned: Knowing the life moments that customers are (or will be) living will help us to determine the message and the functionality of our digital products, and this means redefining the entire customer experience.

Thanks to digitization, it is now that the greatest advantage can be gained from key life events, and now that the entire customer experience can really be redefined around these moments. This is possible mainly for three reasons:

The massive use that users and customers make of digital channels throughout the day has made the capacity to collect data on their activity and interests grow exponentially.

The emergence of big data and machine learning technologies has added a lot more value to exploiting all the collected data and identifying, and even predicting, the life moments of customers.

The possibility that digital technology and channels afford in terms of customizing contents, messages and products… in short, the customization of the entire customer experience.

In this scenario, key life events are the perfect opportunity to start a conversation with users. However, certain aspects must be taken into account in order to achieve the desired effect of engagement with customers.

It is very important to respect the privacy of customers.

Users should not be overwhelmed with messages; you have to choose the right time and come up with the right message.

The message to be conveyed works much better when companies provide effective solutions to actual problems.

If you are working on designing a strategy based on key life events, it is essential for you to measure how the customization of messages, communication actions, etc. is working.

Using these moments to get in touch with customers is much more satisfying than a traditional marketing campaign. In fact, it is up to 10 times more profitable in terms of response rates.

That is why it is worth stopping to think about the above and design a strategy that takes key life events into account to be able to win the hearts of customers.

When democracy is not the best solution

In a traditional company, decisions are usually made following an authoritarian model. Decisions are issued from the top layer of the organisational structure and are communicated downward, without the bottom layers being given any chance to voice their opinion.

The more hierarchical the company, the further the people who make the decisions are from its day-to-day operations. This leads them to use partial or biased information. This is why decisions made according to this model are typically not the best.

Responsibility is usually not delegated in these companies. Therefore, decisions are escalated to the top and travel back down once they are made, which is a rather slow process.

In addition, the ‘Chinese whispers’ problem is fairly common, since what reaches the final recipient in the chain is not always the same as what was said at the start (along the way there are misunderstandings, personal interests, politics, fear…).

Another problem with this model is that it follows principles similar to those of a military organisation; that is, it does not encourage healthy debate. People are often very afraid to voice their opinion, especially if it is not in line with their boss’s, and so they leave out information that can be very relevant.

Therefore, this decision-making model does not make any sense in digital companies, which require a greater degree of freedom and responsibility to deal with innovation and the continual changes the new environment demands.

Photo: Roy Luck

So, how are decisions made in an agile company?

It is a widespread misconception to think this kind of company makes all its decisions in a democratic manner, something which is actually far from the truth.

Making decisions by consensus is slow, and involving everybody in every decision is exhausting. Furthermore, sometimes it is not even the fairest thing to do: Why should everybody give their opinion, without the necessary context, about something that only affects one area of the company – which, in addition, already has its own manager? Why should the opinion of someone who is not familiar with a technology carry the same weight as that of someone who has been working with it for years?

If the democratic model is overused, these situations can be very frustrating. But what is the alternative then? How are decisions made in an agile company?

The alternative is the Advice Process model, whereby any person in the organisation may make a decision pertaining to their job; but before doing so, they must discuss it with the people who will be most affected by the decision and with the most knowledgeable people in the respective field.

It is not about reaching a consensus. This person is not obligated to follow all the advice they receive, but they should at least listen to it and take it into consideration before reaching a decision. The more important the decision, the greater the number of people that should be involved in its making.

Normally, the person who makes the decision is the person who detected the opportunity or who will be the most affected by the decision, regardless of their position. In addition, this person will be responsible for the decision and its consequences.

This ‘advice’ model overcomes the problems of the authoritarian model and of the democratic model we saw above. It avoids both hierarchical validation and consensus, opting instead to get the experts in the subject matter (meritocracy) and the people who will be most affected by the change involved.

It is a decision-making system that we use quite often and almost instinctively here at Paradigma without needing to set up a formal process. It is very agile when it comes to making decisions, it helps to build a sense of community, it encourages the proposal of initiatives from any area in the company, it fosters humility, learning and decentralising the decision-making processes and, above all, it results in better decisions.

The most common mistakes when designing digital products

It is probably an occupational hazard, but every time I interact with a website, a mobile app or a service, I cannot help but put it under the microscope.

I applaud when I have had a good experience and I even wish I could commend the people that work in the business or customer experience areas on the good idea they had or the usefulness of what they did.

Likewise, I would love to be able to talk to them when I see that they could improve small things to bring about significant changes without having to make big investments (as they are surely doing).

Since it is impossible to reach all those people individually, I would like, by means of this post, to give them some examples – should some of them end up reading them – and thus be able to help them.

Careless design

A hospital group puts a very complete customer area at its patients’ disposal. I can make appointments and change them, I am told when I need to stop by to pick up test results, I can see diagnosis reports or tests, I can request certificates of attendance… I cannot think of much else I could ever need. But… the design is awful! It looks like an application straight out of the very beginning of the Internet.

It is true that a ‘careless design’ is sometimes brandished as a way to make users believe they will get lower prices, but that is not the case here. This is clear proof that the website was designed a while back and, even though the tool has evolved, the design is still rooted in the past.

With a little design effort (not even UX) they could have that wow! effect on customers that is so sought after, and their private area would have a brand image worthy of a hospital group.

Moreover, to prevent this from ever happening again, I would advise them not only to develop according to agile methods but also to include design in these work cycles.

Thus, the image of the digital product will not be defined in the early phases of the construction of the product and then completely forgotten during the development and subsequent maintenance stages.

Now they will be able to work on the product image on a cycle-to-cycle basis, throughout the entire lifecycle of the product, to update and refine it as needed. The digital product will thus be always up to date as regards functionality and look.

A pretty product, but it does not work

I am sure you all could provide a lot of examples of this. I am also sure that one of them is an airline’s website where it takes you three or four tries to purchase a ticket.

Or some e-commerce site which, as soon as it posts special discounts or has a sale, cannot handle the traffic and buying anything is impossible.

Companies suffering this problem understand the importance of digital channels because their website (or mobile app) is their main sales channel. Once they saw the importance of digital channels, they focused on their user experience and created an attractive design, but they forgot about technology and/or are based on obsolete systems.

The first thing these companies should do is to automate quality and eliminate the risk in every release by monitoring the most critical functionalities by means of a battery of automatic tests.

In addition, they need to invest in updating their back-ends and hosting to have high-availability environments. Thus, they will be able to increase their turnover and even to optimize their costs. Investing in technology is necessary to be competitive on the Internet.
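As a sketch of what such a battery of automatic tests might look like, the code below gates a release on a handful of checks over the most critical functionalities. The endpoint names and the `fetch` callable are hypothetical; in a real suite, `fetch` would wrap an HTTP client and the suite would run on every deployment.

```python
# A minimal release smoke-test sketch for an e-commerce site.
# The endpoints and the `fetch` callable are illustrative only.

CRITICAL_ENDPOINTS = {
    "search": "/api/search?q=test",
    "product_page": "/api/products/1",
    "checkout": "/api/checkout/health",
}

def run_smoke_tests(fetch):
    """Run every critical check; return the names of the failing ones."""
    failures = []
    for name, path in CRITICAL_ENDPOINTS.items():
        try:
            status = fetch(path)
        except Exception:
            status = None  # a crash counts as a failure too
        if status != 200:
            failures.append(name)
    return failures

# Example with a stubbed backend where checkout is broken:
def stub_fetch(path):
    return 500 if "checkout" in path else 200

print(run_smoke_tests(stub_fetch))  # ['checkout']
```

If the returned list is non-empty, the release is blocked, which is exactly the kind of risk elimination described above.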

The website and the app look like they belong to different companies

An insurance company has a website and an app where you can do more or less the same things.

However, the only thing they have in common is the company logo. They look like they belong to different companies!

And although mobile adaptation is understandable, or they might have been created at different points in time, changing the browsing method, the interaction elements and so on just confuses and frustrates users, who will waste time trying to find the place where and the manner in which they need to do what they came to do in the first place.

Thus, in this case, before keeping on creating products separately, it is worth stopping to merge the usability and the design of the different channels, coming up with a complete service design and creating an omnichannel experience that gives a consistent brand image and unifies communication and interaction with users.

Too far from users

A retailer, focusing on innovation and differentiation, creates services and new ways to buy that are really useful and different from those of its competitors. Yet customers are not aware of them, so hardly anybody uses them.

Paying attention to the product is as important as coming up with the right launch and operation strategy. Running ad campaigns, communicating with users and encouraging the use of the new services are all important for a product to be successful.

Conclusion

Paying attention to user experience and supporting it with a technology that makes it possible (scalable, resilient, maintainable), and devising a good communication strategy are essential for a digital product to be successful.

There are many situations in which a company might forget about any of these things, which might prevent it from providing a good service to its customers.

However, unless the situation is truly dire, there are always small things we can do to substantially improve our product without having to redo it all over again.

How are devices managed in the IoT?

If I were to tell you that the IoT is experiencing fast growth and that it will keep doing so for the next few years, you would answer that I am not telling you anything new. And you would be right: we already talked about it in a previous post.

Thus, to take things one step further, in this post I will focus on one of the main reasons why the growth of the Internet of Things is unstoppable: the importance of connected devices, which are continually gathering and transmitting data around the clock and have become natural drivers of the IoT.

To better understand what I want to convey, imagine you are the mayor of a town called – pardon the pun – Paradigma-by-the-sea. You want to offer a service that lets the townsfolk know how many parking spaces are available around town. Initially, the goal would be to provide the following:

A map of the town divided into neighbourhoods showing which spots are free for parking.

Any parking restrictions, such as loading zones.

The rates (if any).

In addition, thanks to your town’s robust birth rate and the large manufacturing sector that is developing, you would like to be able to expand the service as the town grows and more streets are paved.

In order to build this solution, we are going to use sensors capable of detecting when a vehicle is above them and of communicating this to our cloud platform.

In this scenario there are different device-related challenges:

Inventorying the devices

As you can imagine, it would be necessary to have an inventory of all devices per domain: neighbourhood, street, type of parking, and so on. Thus, we will be able to carry out actions on the devices as a group and also to see the entire installed base.

Of course, having a dynamic, organised inventory is an important task in our project.
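As a minimal sketch of such an inventory, the snippet below groups devices by any domain attribute so that actions can be carried out on them as a group. The attribute names (neighbourhood, street, kind) are illustrative, not part of any real platform.

```python
from collections import defaultdict
from dataclasses import dataclass

# A sketch of a device inventory organised by domain attributes.
# Field names are illustrative only.

@dataclass(frozen=True)
class Sensor:
    device_id: str
    neighbourhood: str
    street: str
    kind: str  # e.g. "regular" or "loading_zone"

class Inventory:
    def __init__(self):
        self._devices = {}

    def register(self, sensor):
        self._devices[sensor.device_id] = sensor

    def group_by(self, attribute):
        """Group device ids by any domain attribute, e.g. 'neighbourhood'."""
        groups = defaultdict(list)
        for s in self._devices.values():
            groups[getattr(s, attribute)].append(s.device_id)
        return dict(groups)

inv = Inventory()
inv.register(Sensor("s-001", "harbour", "Main St", "regular"))
inv.register(Sensor("s-002", "harbour", "Main St", "loading_zone"))
inv.register(Sensor("s-003", "old-town", "High St", "regular"))

print(inv.group_by("neighbourhood"))
# {'harbour': ['s-001', 's-002'], 'old-town': ['s-003']}
```

The same `group_by` call supports bulk actions per street or per parking type, which is what makes the installed base manageable.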

Expanding the coverage of the service

Trusting that your town will grow, we need mechanisms for adding devices to our IoT platform in an economical, transparent manner.

In this regard, the ideal scenario would be for the devices themselves to be capable, once they have been physically installed on the streets, of registering themselves in the platform and starting to send data from the very beginning.

Improving our service with updates

Assuming our current solution has been well received by the townspeople, we would like to offer them more features and, above all, keep all the devices updated with the latest safety patches and software versions.

Once again, having thousands of devices deployed around the town turns a relatively simple problem, such as physically manipulating a device to update it, into a major headache.

As far as the IoT is concerned, we should have mechanisms that allow us to do all kinds of updates in a remote, unmanned, automatic manner, thus bringing down the service’s operating costs.

Monitoring

We would like to offer the people a quality service and, in order to do so, we must be certain that all devices are working properly and are not degrading over time, and if they do, to be able to take appropriate measures. Having real-time information from every device is crucial in this regard.

Accordingly, we would like to go one step beyond the monitoring proper and set up a predictive system based on machine learning.

With the data history we could train a model and detect behavioural patterns that would help us to anticipate potential problems, which would result in a better quality of service.

Managing the settings

The life of a sensor is complex and ever-changing. Sticking to our example, loading zones would be created and terminated according to the commercial activity on the streets. Hence, a sensor situated in a parking space could at a certain time become part of a loading zone.

Once again, we must be capable of changing the settings of the sensors without having to physically go to their locations to alter their operating mode.

Another example would be if we noticed that the sensors were taking a high number of samples per minute (to check whether there are vehicles parked or not): 10, for example.

Once the service has been up and running for a certain amount of time, we might find out that we can provide a service of the same quality with a lower sampling rate, e.g. once every minute. Moreover, to carry out this action, we can rely on our organised inventory.

Limited connectivity

In the present case – thousands of unattended sensors – it would be difficult to guarantee full connectivity between our sensors and our IoT platform. Our system must guarantee that the measurements are delivered no matter what, even if it has to be on a deferred basis.

We will thus be able to have all the data generated over time at our disposal and to analyse them to detect patterns of use of the parking spaces.
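A simple way to achieve this deferred delivery is store-and-forward: readings are buffered on the device and flushed when connectivity returns. The sketch below illustrates the idea; the `send` callable stands in for the real uplink to the IoT platform and is an assumption of this example.

```python
from collections import deque

# Store-and-forward sketch: measurements are buffered locally and
# flushed in order once connectivity returns, so no reading is lost.

class MeasurementBuffer:
    def __init__(self, send, maxlen=10_000):
        self._send = send                      # returns True on delivery
        self._pending = deque(maxlen=maxlen)   # oldest dropped if full

    def record(self, measurement):
        self._pending.append(measurement)
        self.flush()

    def flush(self):
        while self._pending:
            if not self._send(self._pending[0]):
                break                          # still offline; retry later
            self._pending.popleft()

# Simulated outage: deliveries fail at first, then succeed.
delivered, online = [], False
def send(m):
    if online:
        delivered.append(m)
    return online

buf = MeasurementBuffer(send)
buf.record({"space": "A-12", "occupied": True})   # offline: buffered
online = True
buf.record({"space": "A-13", "occupied": False})  # online: both flushed
print(delivered)
```

Both measurements arrive in their original order once the connection is back, which is what later pattern analysis requires.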

If something stands out in the IoT ecosystem, it is that, in the past few months, the main public clouds have responded to these challenges and developed diverse products as part of their already wide ranges of solutions.

Amazon

Amazon has been investing in the IoT for a while now and rounding out its product catalogue to meet the needs of this technology and its use cases. So much so that it became the steward of the FreeRTOS project – a real-time operating system for microcontrollers – to give it a boost and make it part of its solution.

Amazon suggests we use the following services to cover the needs of our devices:

AWS IoT Core: It takes care of transmitting messages between the devices and the cloud. Furthermore, it offers capabilities such as message filtering, security, etc. As you can see, the name does it justice as it is the main component.

It gives us the ability to work in intermittent-connection situations, queuing messages and sending them when the connection is restored.

It manages the settings of the devices via device shadows. Hence, we can adjust their behaviour without having to physically access them.

Monitoring: It provides a connection to AWS CloudWatch to centralise the logs of our devices.

AWS IoT Device Management: With this component we will be able to create groups of devices and create sets of attributes that we can assign to them (e.g. location, device type…) so as to be able to properly manage the inventory of devices. It will also allow us to issue device update jobs. A job is just a series of commands that the device must be capable of interpreting and executing.

By combining the aforementioned services, adding new devices is a straightforward thing to do by means of an auto-discovery feature that works in a secure, scheduled manner.
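To make the shadow mechanism concrete, the sketch below builds the shadow document that would request the sampling-rate change discussed earlier. The `state`/`desired` envelope is the AWS device-shadow format; the `samples_per_minute` field is our own invention. In a real setup this payload would be sent through the AWS IoT Data Plane API (e.g. boto3’s `update_thing_shadow`), which needs credentials and is therefore only mentioned in a comment here.

```python
import json

# Sketch of an AWS IoT device-shadow update for a remote config change.
# "state"/"desired" is the shadow envelope; "samples_per_minute" is a
# hypothetical field of our parking sensors. In practice, send this with:
#   boto3.client("iot-data").update_thing_shadow(
#       thingName="sensor-s-001", payload=payload)

def sampling_rate_update(samples_per_minute):
    return json.dumps({
        "state": {
            "desired": {
                "samples_per_minute": samples_per_minute
            }
        }
    })

payload = sampling_rate_update(1)
print(payload)
```

The device later reads the `desired` section, applies it, and reports back a `reported` section, so the platform always knows which settings each sensor is actually running.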

Google Cloud

As far as Google is concerned, the company centralises the needs of the devices around Google IoT Core, but some functionalities, such as the device provisioning service, are only available in early access to partners like us.

Evidently, Google’s technical solution differs from Amazon’s and proposes different mechanisms, but it also allows:

The inventory of devices to be easily handled via an administration console and provisioning utilities.

The information on the status of each deployed device to be kept.

Commands or actions to be executed in the devices themselves.

The devices to be monitored via Stackdriver.

In Google’s case, its device solution is associated with Android Things, an extension of its Android platform that is focused on a great variety of consumer, retail and industrial devices.

If you are familiar with Android, you will find it very intuitive and easy to build your application and to use the tools Google provides to overcome the challenges we have discussed above.

Conclusions

With this Paradigma-by-the-sea example, I wanted to illustrate the difficulties of integrating the digital and analogue worlds – which is precisely the aim of the IoT – by means of a use case with specific needs.

In the end, the IoT – and everything it involves – presents us with a whole new set of challenges that we must face. In this post I focused on the devices, but I could just as easily have talked about security, performance or even the specific use cases for which this technology is ideally suited.