Real IoT production deployments at scale collect sensor data from hundreds, thousands, or even millions of devices. The goal is to take business-critical actions on the real-time data and to find insights in the stored datasets.
In his session at @ThingsExpo, John Walicki, Watson IoT Developer Advocate at IBM Cloud, will provide a fast-paced developer journey that follows the IoT sensor data from generation, to edge gateway, to edge analytics, to encryption, to the IBM Bluemix cloud, to Watson IoT Real Time rules/actions, to visualization, to Node-RED processing, to cloud storage, to Data Science Experience, to Apache Spark analysis in Jupyter notebooks, to PixieDust visualization and finally machine learning algorithms. You don't need to be an expert data scientist to look for predictive maintenance insights using Watson IoT.
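As a taste of the first hop in that journey, here is a minimal sketch of a device publishing a sensor reading to the Watson IoT Platform over MQTT. The org ID, device type/ID, and token are placeholders, and the topic and client-ID formats follow the common Watson IoT conventions; check the current platform documentation before relying on them.

```python
# Hypothetical sketch: publish a simulated sensor reading to Watson IoT over MQTT.
# Org ID, device type/ID, and token are placeholders; uses the paho-mqtt 1.x API.
import json
import time
import paho.mqtt.client as mqtt

ORG, DEV_TYPE, DEV_ID, TOKEN = "myorg", "gateway", "sensor01", "auth-token"

client = mqtt.Client(client_id=f"d:{ORG}:{DEV_TYPE}:{DEV_ID}")
client.username_pw_set("use-token-auth", TOKEN)
client.connect(f"{ORG}.messaging.internetofthings.ibmcloud.com", 1883)

# Device events are published on iot-2/evt/<event_id>/fmt/json
event = {"d": {"temperature": 21.4, "humidity": 48, "ts": time.time()}}
client.publish("iot-2/evt/status/fmt/json", json.dumps(event), qos=1)
client.disconnect()
```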

Because not everything the internet offers is suitable for all users, organizations use web filters to block unwanted content. However, filtering content becomes challenging as network speeds increase. Two filtering architectures are explored below, along with criteria to help you decide which option is the best fit for your organization.
To ensure service levels and capacity as internet traffic increases, organizations need higher-speed networks. In telecom networks serving hundreds of thousands of users, 100 Gbps links are being introduced to keep up with demand. Today, the market has reached a state of maturity for web content filtering at 1 Gbps and 10 Gbps, but filtering at 100 Gbps poses a whole new set of challenges.

"DivvyCloud as a company set out to help customers automate solutions to the most common cloud problems," noted Jeremy Snyder, VP of Business Development at DivvyCloud, in this SYS-CON.tv interview at 20th Cloud Expo, held June 6-8, 2017, at the Javits Center in New York City, NY.

Interested in leveling up on your Cloud Foundry skills? Join IBM for Cloud Foundry Days on June 7 at Cloud Expo New York at the Javits Center in New York City. Cloud Foundry Days is a free half-day educational conference and networking event. Come find out why Cloud Foundry is the industry's fastest-growing and most adopted cloud application platform.

Most DevOps journeys involve several phases of maturity. Research shows that the inflection point where organizations begin to see maximum value is when they tightly integrate deploying their code with their infrastructure. Success at this level is the last barrier to at-will deployment. Storage, for instance, is capable of far more than simply being the place where we read and write data. In his session at @DevOpsSummit at 20th Cloud Expo, Josh Atwell, a Developer Advocate for NetApp, will discuss the role and value extensible storage infrastructure has in accelerating software development activities, improving code quality, revealing multiple deployment options through automated testing, and supporting continuous integration efforts. All this will be described using tools common in DevOps organizations.

This past week HPE continued buying into server storage I/O data infrastructure technologies, announcing an all-cash (i.e., no stock) acquisition of Nimble Storage (NMBL). The cash acquisition, for a little over $1B USD, amounts to $12.50 USD per Nimble share, a significant premium over where it had been trading. As a refresher, or overview, Nimble makes shared storage systems that leverage NAND flash solid-state device (SSD) performance. Note that Nimble also partners with Cisco and Lenovo, whose platforms compete with HPE servers for converged systems.
Earlier this year (keep in mind it's only mid-March) HPE also announced the acquisition of server storage hyper-converged infrastructure (HCI) vendor SimpliVity (for about $650M USD in cash). In another investment this year, HPE joined other investors in the latest funding round of Hedvig, a scale-out, software-defined storage startup (more on that later). These acquisitions are in addition to smaller ones, such as last year's purchase of SGI, not to mention various divestitures.

SmartBear recently completed an analysis to gauge what software professionals believe is the core value provided by API virtualization. It concluded that software professionals, including developers, testers, managers and architects, believe the biggest benefit of virtualization is that it brings teams together by allowing them to collaborate. In total, 18% more respondents indicated that virtualization has more value in uniting teams than in adding speed to delivery or reducing costs.

Officially, virtual data rooms (VDRs) are defined as digital databases in which companies can store and easily share data with third parties, usually during a business deal. In the past five years, the VDR industry has boomed and by this year is expected to reach $1.2 trillion, according to research firm IBIS.

With the complexity of today’s applications, it’s easy to end up in a situation where all of the pieces of your code aren’t ready at the same time. As a developer, you might be waiting for a third-party API to get updated, a partner organization to finish their code, or other teams in your organization to have a component ready to start testing against. This can be a drag on your organization’s entire release schedule, as testing is backed up waiting for all the pieces to be finished.
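This is exactly the gap service virtualization fills: give the team a stand-in for the piece that isn't ready yet. As a minimal illustration of the idea (not a full service virtualization tool), the sketch below stubs out a hypothetical third-party pricing API so a checkout calculation can be tested before the real service exists.

```python
# Minimal illustration of testing against a stand-in for an unfinished dependency.
# The pricing client and get_quote() call are hypothetical examples.
from unittest import mock
import unittest

def checkout_total(sku, qty, pricing_client):
    """Business logic under test; depends on a third-party pricing API."""
    quote = pricing_client.get_quote(sku)
    return round(quote["unit_price"] * qty, 2)

class CheckoutTest(unittest.TestCase):
    def test_total_uses_quoted_price(self):
        # "Virtualize" the not-yet-available service with a canned response.
        fake_pricing = mock.Mock()
        fake_pricing.get_quote.return_value = {"unit_price": 9.99}
        self.assertEqual(checkout_total("SKU-1", 3, fake_pricing), 29.97)

if __name__ == "__main__":
    unittest.main()
```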

Server Virtualization has transformed the way we manage server workloads but virtualization hypervisors were not the endgame of datacenter management. What is the role of server virtualization and hypervisors in the new age of cloud, containers, and more importantly, hyperconvergence?
I covered SAN technology in my last Infrastructure 101 article, so for today I'm going to cover server virtualization and maybe delve into containers and cloud.
Server virtualization as we know it today is based on hypervisor technology. A hypervisor is a specialized operating system, or software layer, that allows physical computing resources such as networking, CPU, RAM, and storage to be shared among multiple virtual machines (sometimes called virtual servers). Virtual machines replaced traditional physical servers, each of which had its own physical chassis with storage, RAM, networking, and CPU. To understand the importance of hypervisors, let's look at a bit of history.
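For a concrete feel of what "sharing physical resources among virtual machines" looks like from the management side, here is a small sketch that asks a hypervisor which VMs it is hosting. It assumes a local QEMU/KVM host and the libvirt Python bindings; the connection URI differs for other hypervisors.

```python
# Quick sketch: querying a hypervisor for its virtual machines with libvirt-python.
# Assumes a local QEMU/KVM host; other hypervisors use different connection URIs.
import libvirt

conn = libvirt.openReadOnly("qemu:///system")
print("Hypervisor:", conn.getType(), "on", conn.getHostname())

for dom in conn.listAllDomains():
    state = "running" if dom.isActive() else "stopped"
    # maxMemory() is reported in KiB
    print(f"{dom.name():20} {state:8} {dom.maxMemory() // 1024} MiB")

conn.close()
```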

Almost three years ago, VMware introduced the world to its virtual SAN. This new solution enabled customers to use storage within ESXi servers without the need for external storage – an exciting promise for organizations that wanted to quickly scale their virtual storage. Now, it’s time to check in on this technology and see if it’s living up to its promise.
VMware became a player in the storage array and software market when it launched vSAN. Server admins were looking forward to using vSAN because it gave them a symmetrical architecture that did not require external storage, allowing them to use storage within existing servers instead. It also doesn't require specialized storage skills. However, no one solution can be all things to all enterprises, and as enterprises began to deploy vSAN across their environments, they noticed something big was missing.

Enterprise IT has been in the era of Hybrid Cloud for some time now. But it seems most conversations about Hybrid are focused on integrating AWS, Microsoft Azure, or Google ECM into existing on-premises systems. Where is all the Private Cloud? What do technology providers need to do to make their offerings more compelling? How should enterprise IT executives and buyers define their focus, needs, and roadmap, and communicate that clearly to the providers?

Information self-service is undoubtedly one of the main drivers of Modern Data Management. From “data services marketplaces” to “self-service Big Data analytics,” one of the objectives of most data-related initiatives today is to provide business professionals with new ways to solve their information needs with the goals of achieving self-reliance and minimizing the IT bottleneck. However, is it realistic to expect business users to assume this job?
Studies [1] report that more than 60 percent of companies grade their experience with self-service initiatives as “average” or lower, with nearly three out of four (73 percent) claiming that “…it requires more training than expected.” So, what is the problem and what can we do to solve it? Let’s start with the easy part: data visualization, which is the last stage of the data analysis process. Self-service BI tools have been around for some years now, allowing business data analysts to create their own graphical reports. Although those tools are not for every business user, business analysts with data experience, basic knowledge of statistics, and a bit of SQL can use them successfully.

Reality itself is going through a digital transformation thanks to leaps in 3D rendering and in the speed at which motion-feedback data can be crunched. Although the modern definition of virtual reality (VR) has been making promises for three decades, the emphasis was always on the potential. Now it’s here. This is a tour of the state of VR in 2016 and where developers are taking it as VR spreads far beyond the world of gaming.

Fact: storage performance problems have only gotten more complicated, as applications not only have become largely virtualized, but also have moved to cloud-based infrastructures. Storage performance in virtualized environments isn’t just about IOPS anymore. Instead, you need to guarantee performance for individual VMs, helping applications maintain performance as the number of VMs continues to go up in real time.
In his session at Cloud Expo, Dhiraj Sehgal, Product and Marketing at Tintri, shared success stories from a few folks who have already started using VM-aware storage. By managing storage operations at the VM-level, they’ve been able to solve their most vexing storage problems, and create infrastructures that scale to meet the needs of their applications. Best of all, they’ve got predictable, manageable storage performance – at a level conventional storage can’t match.

Flash storage has become a mainstream technology, with 451 Research expecting the market to reach $9.6 billion by 2020. As the technology becomes less cost-prohibitive, and benefits such as its exponentially greater performance capabilities and simplified process for provisioning and optimizing systems become more sought after, it’s clear that the future of storage is flash. But while some organizations may have taken advantage of the burgeoning technology’s benefits early on, a significant number of companies have yet to make the transition. Your organization may very well fall into this category.

VMware configurations designed to provide high availability often make it difficult to achieve satisfactory performance required by mission-critical SQL Server applications. But what if it were possible to have both high availability and high performance without the high cost and complexity normally required?
This article explores two requirements for getting both for SQL applications, while reducing capital and operational expenditures. The first is to implement a storage architecture within VMware environments designed for both high availability and high performance; the second is to tune that high-availability/high-performance (HA/HP) architecture for peak performance.

What happens when the different parts of a vehicle become smarter than the vehicle itself? As we move toward the era of smart everything, hundreds of entities in a vehicle that communicate with each other, the vehicle, and external systems create a need for identity orchestration so that all entities work as a conglomerate. Much like an orchestra without a conductor, transportation and fleet services that cannot secure, control, and connect the link between a vehicle’s head unit, devices, and systems, and that cannot manage the lifecycle of people, systems, and devices, are at risk of having connected yet disparate systems.

When it comes to IT infrastructure, there are some big differences in the needs of the SMB vs the enterprise. What might be minor hiccups in the enterprise can be major challenges in the SMB. What are these differences and how should they affect the way solutions are provided?

This is part one of a two-part series of posts about using some common server storage I/O benchmark tools and workload scripts. View part II here which includes the workload scripts and where to view sample results.
There are various tools and workloads for server I/O benchmark testing, validation and exercising different storage devices (or systems and appliances) such as Non-Volatile Memory (NVM) flash Solid State Devices (SSDs) or Hard Disk Drives (HDD) among others.

Incumbent storage vendors such as EMC, NetApp, and Nutanix have built their rich code bases on the block layer. This means that the only viable option for using Persistent Memory as a storage tier is to wrap it in an abstraction layer that presents it as a block device.
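As a toy illustration of that abstraction layer (not any vendor's implementation), the sketch below exposes a byte-addressable region, here an ordinary memory-mapped file standing in for persistent memory, through a fixed-size block read/write interface so block-oriented code can use it unchanged.

```python
# Toy sketch: presenting a byte-addressable region (an mmap'd file standing in
# for persistent memory) through a fixed-size block interface.
import mmap
import os

BLOCK_SIZE = 4096

class PmemBlockDevice:
    def __init__(self, path, size_bytes):
        self.fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o600)
        os.ftruncate(self.fd, size_bytes)
        self.mem = mmap.mmap(self.fd, size_bytes)

    def read_block(self, lba):
        off = lba * BLOCK_SIZE
        return self.mem[off:off + BLOCK_SIZE]

    def write_block(self, lba, data):
        assert len(data) == BLOCK_SIZE
        off = lba * BLOCK_SIZE
        self.mem[off:off + BLOCK_SIZE] = data
        self.mem.flush(off, BLOCK_SIZE)   # stand-in for a persistence barrier

dev = PmemBlockDevice("/tmp/pmem.img", 1 << 20)
dev.write_block(0, b"x" * BLOCK_SIZE)
print(dev.read_block(0)[:4])
```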

For those involved in data management or data infrastructures, the following are five tips to help cut the overhead and resulting impact of digital e-waste and, later, physical e-waste. Most conversations involving e-waste focus on the physical aspects of disposing of electronics and their later impacts. While physical e-waste is an important topic, let's expand the conversation to include other variations of e-waste, including digital. By digital e-waste I'm referring to digital items whose use of physical resources ends up contributing to traditional e-waste.

Containers are rapidly rushing to the fore. They’re the darling du jour of DevOps, and it’s a rare conversation on microservices that doesn’t invoke its BFF, containers. SDxCentral’s recent report on containers found that only 17% of respondents were not considering containers at all. That’s comparable with Kubernetes’ State of the Container World Jan 2016 assertion that 71% of folks were actively using containers, though the Kubernetes survey found a much higher percentage of respondents running containers in production (50%) than SDxCentral did (7%).

The business dictionary defines efficiency as the comparison of what is actually produced or performed with what can be achieved with the same consumption of resources (money, time, labor, design, etc.). For example: the designers needed to revise the product specifications because the complexity of its parts reduced the efficiency of the product.
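Put as a ratio, that definition is simply actual output divided by the output achievable with the same resources. A quick worked example with made-up numbers:

```python
# Worked example of the ratio implied by the definition above (numbers are made up):
# efficiency = actual output / achievable output with the same resources.
achievable_units_per_shift = 500   # what the line could produce with the same inputs
actual_units_per_shift = 410       # what it actually produced

efficiency = actual_units_per_shift / achievable_units_per_shift
print(f"Efficiency: {efficiency:.0%}")   # -> Efficiency: 82%
```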

Need to test a server, storage I/O networking, hardware, software, services, cloud, virtual, physical or other environment that is doing some form of file processing, or one where you simply want some extra workload running in the background for whatever reason?
Here's a quick and relatively easy way to do it with Vdbench (free from Oracle). Granted, there are other tools, both free and for a fee, that can do similar things; however, we will leave those for another day and post. Here's the con to this approach: there is no GUI like you have available with some other tools. Here's the pro: it's free, flexible, and limited only by your creativity, amount of storage space, server memory, and I/O capacity.
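A hedged sketch of what that looks like in practice: generate a small Vdbench file-system workload definition and launch it. The anchor directory, file counts, and rates are arbitrary, and the fsd/fwd/rd parameter names should be checked against the documentation bundled with your Vdbench download.

```python
# Sketch: write a small Vdbench file-system workload parameter file and run it.
# Assumes vdbench has been downloaded and unzipped locally (use vdbench.bat on Windows).
import subprocess
from pathlib import Path

params = """\
fsd=fsd1,anchor=/tmp/vdb_test,depth=2,width=3,files=16,size=1m
fwd=fwd1,fsd=fsd1,operation=read,xfersize=4k,fileio=random,fileselect=random,threads=4
rd=rd1,fwd=fwd1,fwdrate=max,format=yes,elapsed=120,interval=10
"""

Path("filework.txt").write_text(params)
subprocess.run(["./vdbench", "-f", "filework.txt", "-o", "vdb_results"], check=True)
```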

Actifio has announced the general availability of Actifio Global Manager (AGM), a web-scale data virtualization solution delivering instant access and radically simple management of application data for business resiliency and test data management across private, public, and hybrid cloud environments.
Over the last 6 years, the Actifio copy data virtualization platform has been deployed in many of the world's largest and most complex enterprise IT organizations and Managed Service Providers (MSPs). It has scaled up to thousands of application instances associated with petabytes of data deployed across private data centers, and hybrid or public cloud environments including Amazon AWS. After an extensive early access program, Actifio has released AGM for general availability for these web-scale environments, delivering Actifio's trademark capabilities of instant application data access, for even very large database instances, all driven by Service Level Agreements (SLAs) extending across the full lifecycle of data from production to retirement.

From the SD Times March Madness Tournament, to the list of new research from voke, Forrester, and Gartner, to the most crowded sessions at key software testing conferences, service virtualization was a hot topic throughout 2015.
Out of the 129 white papers, articles, videos, and case studies on Parasoft's service virtualization resource center, these 10 were the most popular in 2015.

SUSE® has joined the Open Platform for NFV (OPNFV) project, a carrier-grade, integrated open source platform that is accelerating the introduction of new products and services using network functions virtualization (NFV). The addition of NFV capabilities enhances SUSE's software-defined data center offerings, including OpenStack-based cloud infrastructure and Ceph-based software-defined storage.
"SUSE is extending what we've been doing for years in the mission-critical compute, OpenStack cloud and enterprise storage spaces, bringing carrier-grade technology and service to the software-defined data center," said Nils Brauckmann, president and general manager of SUSE. "Our engagement with the OPNFV project as a platinum member will help accelerate the NFV platform for partners and customers alike."

EMC Corporation, the world leader in information management and storage, today announced that it has been positioned by Gartner, Inc. in the 'Leaders' quadrant in the 'Magic Quadrant for Enterprise Content Management, 2005'(1) report. Gartner Inc.'s Magic Quadrant positioned EMC as an enterprise content management (ECM) leader based on the completeness of its vision and ability to execute that vision. Gartner describes companies listed in the 'Leaders' quadrant as performing well today, having a clear vision of market direction and actively building competencies to sustain their leadership position in the market.

Our guest on the podcast this week is Helen Beal, Head of DevOps at Ranger4 Limited. We discuss how successful DevOps transitions depend on culture, so to start companies must identify their current problem areas. Helen describes the most successful DevOps culture as a place where each individual has autonomy as part of the larger team and where experimentation is encouraged.

Connected things, systems and people can provide information to other things, systems and people and initiate actions for each other that result in new service possibilities. By taking a look at the impact of Internet of Things when it transitions to a highly connected services marketplace we can understand how connecting the right “things” and leveraging the right partners can provide enormous impact to your business’ growth and success. In her general session at @ThingsExpo, Esmeralda Swartz, VP, Marketing Enterprise and Cloud at Ericsson, discussed how this exciting emergence of layers of service offerings across a growing partner ecosystem can be monetized for the benefit of smart digital citizens, enterprises and society.

Overgrown applications have given way to modular applications, driven by the need to break larger problems into smaller ones. Similarly, large monolithic development processes have been forced to break into smaller agile development cycles. Looking at trends in software development, microservices architectures meet the same demands.
Additional benefits of microservices architectures are compartmentalization and a limited impact from the failure of a single service versus a complete software malfunction. The problem is that there are a lot of moving parts in these designs; this makes assuring performance complex, especially if the services are geographically distributed or provided by multiple third parties.
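One way to see that compartmentalization benefit is a simple circuit breaker in front of each downstream dependency, so one failing service degrades a feature instead of taking down the whole application. The sketch below is a minimal, hypothetical version; the inventory_service call and thresholds are illustrative only.

```python
# Minimal sketch of containing one service's failure with a simple circuit breaker.
import time

class CircuitBreaker:
    def __init__(self, max_failures=3, reset_after=30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args, fallback=None):
        # While the breaker is open, fail fast instead of piling up timeouts.
        if self.opened_at is not None and time.time() - self.opened_at < self.reset_after:
            return fallback
        try:
            result = fn(*args)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.time()
            return fallback
        # A successful call closes the breaker and clears the failure count.
        self.failures, self.opened_at = 0, None
        return result

breaker = CircuitBreaker()
# Example with a hypothetical service client:
# stock = breaker.call(inventory_service.get_stock, "SKU-1", fallback="unknown")
```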

Want hands-on experience with service virtualization—one of the most exciting new software testing technologies in years? Then don't miss this free Service Virtualization certification program led by Parasoft: the company that pioneered service virtualization in 2002.
After spending 2 hours with top Software Evangelist Arthur "Code Curmudgeon" Hicken, you'll have a core understanding of how your team can use service virtualization to test earlier, faster, and more completely.

Clutch is now a Docker Authorized Consulting Partner, having completed Docker's certification course on the "Docker Accelerator for CI Engagements." More info about Clutch's success implementing Docker can be found here.
Docker is an open platform for developers and system administrators to build, ship and run distributed applications. With Docker, IT organizations shrink application delivery from months to minutes, frictionlessly move workloads between data centers and the cloud and achieve 20x greater efficiency in their use of computing resources. Inspired by an active community and transparent, open source innovation, Docker containers have been downloaded more than 800 million times. Docker also provides enterprise subscriptions that deliver the software, support and maintenance organizations need to deploy a Dockerized application environment.

SYS-CON Events announced today that Interface Masters Technologies, provider of leading network visibility and monitoring solutions, will exhibit at the 17th International CloudExpo®, which will take place on November 3–5, 2015, at the Santa Clara Convention Center in Santa Clara, CA.
Interface Masters Technologies is a leading provider of high speed networking solutions focused on Gigabit, 10 Gigabit, 40 Gigabit and 100 Gigabit Ethernet network access and connectivity products. For over 20 years, the company has been providing innovative networking solutions with customization services to OEMs, large enterprises and sophisticated end users. Interface Masters has been an OCP member contributing multiple white box designs to the project while supporting customer SDN development.

Communications cooperative HTC centralizes storage management to gain powerful visibility, reduce costs, and implement IT disaster avoidance capabilities.
We’ll learn more about how HTC lowers total storage utilization cost while bringing in a common management view to improve problem resolution, automate resource allocation, and more fully achieve compliance -- as well as set the stage for broader virtualization and business continuity benefits.

IoT is the next evolution of how the Internet is applied to the world. The TAM for M2M/IoT is estimated at $19 trillion. The IoT device count is in the billions, but much of that traffic will not traverse service providers’ networks. Service providers and vendors are struggling to understand how to map the TAM dollars to real use cases, optimal technology approaches, and profitable business models.
In his session at @ThingsExpo, Dennis Ward, IoT analyst and strategist at DWE, will focus on the SP transformations that will occur: in Phase I, SPs will virtualize their infrastructure; in Phase II, SPs will focus on monetization. The key is IoT cloud-based, service-centric use cases.

Coolect, LLC, innovators of digital media management software, announced today the release of a new personal data management software product, Coolect, setting a new benchmark for the collecting, organizing, storing and displaying of digital media.

FileMaker today announced the immediate availability of FileMaker Pro 8, the newest version of the most-awarded desktop database, featuring new ways to work faster, share and manage information of all types, and be more productive.

When I started exploring virtualization, like many folks, I was in awe of how much efficiency came with moving physical servers into VMs. To this day, the number of success stories about improved usage, reduced overhead costs and increased functionality makes virtualization a solid business model for IT folks.
Then I learned about containerization and, well, it takes efficiency to another level.

We really are moving in the direction of truly commoditized hardware. Some uses will always have specific requirements that are not mainstream and thus will require specialized builds; this is true in every industry. But increasingly, who made your hardware and where they got their parts from is a secondary issue.
Which makes one consider what really sells hardware these days. Years ago when I was working for Network Computing, I reviewed a low-end blade server company capable of cranking up blades at a fraction of the cost of most vendors. They (like far too many good companies) ran out of money before they could grab market traction, but they did show that it could be done at a price even small enterprises could afford.

The amount of data processed in the world doubles every three years and a global commitment to open source technology is the way to handle this growth.
An open technology approach fosters innovation through massive community involvement and impedes expensive vendor lock-in. This benefits buyers as markets remain more competitive. In doing so, open standards and technologies also allow for market hypergrowth, and this is the key to handling the growth of data.
A doubling every three years means we'll be grappling with a full Yottabyte of data in the year 2040. That's one billion petabytes, an amount of data that, similar to pondering geologic time, I can understand in the abstract but not truly grasp.
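The arithmetic behind that claim is worth a quick sanity check: growing a thousandfold (one zettabyte to one yottabyte) at a doubling every three years takes roughly ten doublings, or about thirty years.

```python
# Back-of-the-envelope check: with a doubling every three years, roughly how long
# does it take to grow a thousandfold (1 ZB -> 1 YB)? About ten doublings.
import math

doubling_period_years = 3
doublings_needed = math.log2(1000)          # ~9.97 doublings for a 1000x increase
years_needed = doublings_needed * doubling_period_years
print(f"{doublings_needed:.1f} doublings is about {years_needed:.0f} years")  # ~30 years
```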
Meanwhile, the nature of this data—which can truly be called Big Data in today's age of the Zettabyte—is transforming from a jet plane model to a chewing gum model.
By this I mean Big Data in its original conception 20 years ago referred to a small number of massive files, the type found in meteorology and nuclear-bomb building. Tomorrow's Big Data will largely be a product of billions of sensors, transmitting less than 10K at a time. Rather than thinking about a few 747s, we'll be thinking about billions of pieces of chewing gum.
Already There
We're already in such a Little Big Data era, with stuff like Hadoop and NoSQL databases equipped to handle the onslaught of data volume, variety, and, because much of this data arrives in real time, velocity.
But we've only just begun. Innovation must continue apace, in hardware even more so than in software. Today's technology already requires almost 3% of the world's electricity to power its data centers; exponential increases in data processing simply will not be met by the global electricity grid in the absence of vast new hardware efficiencies.
The OPG
Thus I'm involved with something called the Open Performance Grid, or OPG. Announced in San Francisco in August 2015, the Open Performance Grid measures openness, performance, and leadership of hardware, software, and designs for modern data centers.
The OPG is a community effort with input from technology users and buyers, analysts and researchers, and vendors who wish to compare their own self-assessments with what the community is saying.
Sample measures of openness, beyond simple open-source availability, include the presence, size, and activity of a community and foundation for a particular technology. Market share, benchmark performance, and what we call the Innovation Curve are also part of the mix.
Software categories include operating systems, virtualization, containers, PaaS, IaaS services and stacks, monitoring/analytics, management consoles, software-defined storage, SDN, SDDC. For hardware, we're looking at chips, boards, subsystems, and even overall data center designs.
The challenge of meeting the astounding growth in data is enormous. The Open Performance Grid is [...]

Services providers have traditionally organized the management and operation of different technologies into several teams with very specific domain knowledge. These teams have been staffed with specialists looking after routing, network services, servers, virtualization, storage area networks, security and various other technology domains. Over time, these functional teams have had the tendency to develop into loosely tied silos.

Mobile devices. Cloud-based services. The Internet of Things. What do all of these trends have in common? They are some of the factors driving the unprecedented growth of data today. And where data grows, so does the need for data storage. The traditional method of buying more hardware is cost-prohibitive at the scale needed. As a result, a new storage paradigm is required.
Enterprises today need flexible, scalable storage approaches if they hope to keep up with rising data demands. Software-defined storage (SDS) offers the needed flexibility. In light of the varied storage and compute needs of organizations, two SDS options have arisen: hyperconverged and hyperscale. Each approach has its distinctive features and benefits, which are explored below.

I’ve always had a fascination with the way information is acquired and processed. Reading back through the history of this site, you can see this tendency toward more fanciful thinking, e.g., GPGPU-assisted network analytics, or future storage systems using Torrenza-style processing. What was once theory has made its way into the realm of praxis; look no further than ICML 2015, for example, to see the forays into DML that NVIDIA is making with its GPUs. And on the story goes. Having said all this, there are elements of data, of data networking, of data processing, which, to date, have NOT gleaned all the benefits of this type of acceleration. To that end, what I am going to attempt to posit today is an area where Neural Networking (or at least the benefits therein) can be usefully applied to something interacted with every single nanosecond of every day: the network.

SYS-CON Events announced today that the "First Containers & Microservices Conference" will take place June 9-11, 2015, at the Javits Center in New York City. The “Second Containers & Microservices Conference” will take place November 3-5, 2015, at Santa Clara Convention Center, Santa Clara, CA.
Containers and microservices have become topics of intense interest throughout the cloud developer and enterprise IT communities.

SYS-CON Events announced today that WSM International (WSM), the world’s leading cloud and server migration services provider, will exhibit at SYS-CON's 16th International Cloud Expo®, which will take place on June 9-11, 2015, at the Javits Center in New York City, NY.
WSM is a solutions integrator with a core focus on cloud and server migration, transformation and DevOps services.

The Citrix X1 Mouse dramatically improves the user experience of any remote Windows app or desktop delivered to an iPad via Citrix and makes anyone more productive.
At Citrix, we’ve been helping people access and use business apps on any device for years. Yet many of our customers depend on Windows-based applications that are hard to use on iPad and Android tablets, because so many features depend on the point-and-click simplicity and accuracy of a physical mouse.

We heard for many years how developing nations would be able to develop mobile-phone networks quickly, perhaps even leapfrog developed nations, because their lack of traditional, wired networks would not inhibit them from deploying the new technology.
Now there is talk of history repeating itself with the Industrial Internet--a key aspect of the emerging Internet of Things. For example, Guo Ping, Deputy Chairman of the Board of Chinese electronics giant Huawei, said in a recent report from the World Economic Forum, "The Industrial Internet will afford emerging markets a unique opportunity to leapfrog developed countries in digital infrastructure."
To some degree the first prediction turned out to be true, as mobile communications have become well established in many developing countries, and mobile phones the first phones ever used by perhaps 2 billion people. Our ongoing research at the Tau Institute shows that, indeed, developing nations in several regions are the most dynamic among all nations of the world.
Unleashing Potential
Now, with the Industrial Internet, no less a pontificator than Salesforce CEO Marc Benioff pronounced the IoT “ground zero for a new phase of global transformation...reshaping industries,” in the same WEF report.
This particular report, entitled “Industrial Internet of Things: Unleashing the Potential of Connected Products and Services,” cites operational efficiency, connected ecosystems, software platforms, collaboration between humans and machines, and something called the outcome economy as the key opportunities afforded by the Industrial Internet.
(“Outcome economy” is some mumbo-jumbo invented by the report’s collaborator, Accenture, and seems to mean that feedback from the IoT will provide companies with new insights that let them create products and services that will better meet customers’ outcomes. Perhaps pharmaceutical companies in the past, for example, were unclear that their customers wanted to feel better.)
In any case, the touted new efficiencies of the IoT in general and the Industrial(ized) Internet in particular do seem to hold promise to bring new productivity--and if history is a guide, economic growth--to nations that move toward the IoT aggressively.
Healthy Growth
Economic growth without increased economic parity and social development will be the empty calories of this new global development engine: if bigger just means fatter, then nations will be hurting themselves over the long term.
This is one of our concerns about recent economic growth in the Philippines, for example. It’s widely reported that the administration of President Noynoy Aquino--which runs from 2010-2016 in [...]