Plaster Group
Business Intelligence and Information Management

Plaster Group’s First Volunteer Day!
Wed, 16 Aug 2017

One of the most popular suggestions to improve our workplace environment on last year’s employee satisfaction survey? That Plaster Group should volunteer our time somewhere as a team. So we took our consultants up on their suggestion and did just that.

Did you know 1 in 5 Washingtonians relies on their local food bank? This Saturday, Plaster Group consultants and their families were warmly welcomed by Northwest Harvest, the only nonprofit food bank distributor operating statewide in Washington. With a network of 375 food banks, Northwest Harvest is able to feed a family of three a nutritious meal for only 67 cents, thanks to the work of volunteers. We had a great time packaging a mountain of beans through coordination and teamwork!

Plaster Group is one of Washington’s Best Places to Work!
Mon, 07 Aug 2017

Plaster Group has been a finalist on the Puget Sound Business Journal’s WA Best Places to Work list three times, and last Thursday we were recognized as the second-best medium-sized company to work for in Washington. We are very proud of this distinction as we consistently seek to serve our Consultants, our Clients, and the Community. Read more about why we are one of Washington’s Best Places to Work here!

Summer 2017 Baseball Game!
Mon, 24 Jul 2017

Plaster Group had a great time at Safeco Field this weekend watching the Mariners play the Yankees at our annual summer get-together. We have the best consultants in Seattle!

Agile Nirvana
Mon, 03 Jul 2017
by Shama Bole, Sr. Agile Consultant

Growing up, I remember being told the story of a village boy in India who, at a very tender age, wrote a famous treatise summarising his meaning-of-life philosophy and conclusions, and in the wake of that, sought samadhi (death via unending meditation) in a nearby cave that he requested be sealed thereafter. (He has been revered as a saint since the 13th century.) Apparently, life held no more questions, surprises, puzzles worth the pondering.

Unfortunately, people with a similar degree of certitude about their adoptive philosophy are rarely obliging enough to hibernate in meditative solitude. Instead, they evangelise. Nowhere is this truer than in the Agile landscape. (Well, that’s an exaggeration, given the torridity of political and religious evangelising, but it makes my point.)

I think this rather stream-of-consciousness article/gripe was born of some frustration with Agile converts and their sometimes uninformed zealotry (which sounds breathtakingly presumptuous, I know). But hear me out.

Art, Not Science

Agile, like project management, is more art than science and draws more than its share of argumentative fanatics. (This may explain why physical-science practitioners seem saner, since they deal with absolutes, though Copernicus, Galileo et al. might have deeply ironic posthumous thoughts to share on that.) Art seems to have a greater margin for opinions and ambiguity, and that may be the source of so much dissent within the ranks of Agilists. Natural science absolutely has its own paradigms, but there is a grounding in theory and empirically measurable outcomes that is extremely challenging to achieve outside of the laboratory. Agile is very much about people, and relationships are messy things. There is no science around resolution and no clear path to some universal truth, as may be found in managing budgets or P&L statements. Dialectics is critical to intellectual progress, but Agile thinkers often seem blind to all views but their own.

Unchecked passion

Sounds thrilling, doesn’t it? Not quite. My company is often engaged to effect Agile transformation, and we’re always looking to bring on folks who can accomplish that. When hiring, we look not just for competence and expertise but also for a level of polish combined with amicability. Inevitably, “a spoonful of sugar makes the medicine go down”: one of the harsher truths in life is that any kind of change is easier to accept when delivered by someone personable. It’s not the strength of one’s convictions that sells but rather the efficacy of one’s approach. I’m sure this is true in matters of the heart as well! Too many Agilists seem to forget or ignore this principle and indulge in passion unchecked by any veneer of professionalism.

Old wine in new bottles

A lot of Agile presentations and written material is essentially nothing more profound than old wine in new bottles. I saw this in graduate school, where we students floundered for a dissertation topic on which to make our mark in academia. It wasn’t about original thinking, but rather about leveraging intellectual capital to launch oneself out of obscurity.

To illustrate this: at one of the conferences I attended, an Agilist gave a talk predicated on the notion that there were no values prescribed by Agile. To me, that seemed a case of not seeing the forest for the trees. Agile is a philosophy, a set of values and a suggested way of interacting with the world. It’s a directional compass and a behavioural guide. No values?

More emotional than cerebral

To a lot of practitioners, Agile philosophy comes as epiphany, and I don’t use the word lightly. After all, what comes to your mind when you think of epiphany? To me it is the image of Archimedes shrieking “Eureka!” as he ran through the streets of Syracuse, naked from his bath, in a state of mental elevation that obliterated everyday habits of common sense and rational behaviour … a possibly apocryphal tale but who cares? What a great story! Adopting Agile is beyond just methodology – it is an inner transformation that engages one’s passions and beliefs and value systems. To me, that puts it outside the realm of a purely cerebral exercise in project delivery. Some of the fallout/passion is attributable to that.

Hubris

Other than (excess) passion, Agilists also tend to suffer from an excess of certitude and a tendency to dismiss everyone else’s opinion and store of knowledge. I’ve seen CVs posted where Agilists proclaim themselves “Masters” of Agile. Where does this hubris come from? I listened to a hilarious segment by a stand-up comedian who explains the inexplicable success of political rhetoric/pap with the following theory: people tend to see as truth (only) what they can in fact understand. However, truth can and often does come in complex and no-directions-provided packages as well. Interestingly enough, though, understanding Agile is not hard. Implementing it is tricky. The best (Agile) coaches I know are humble and genuinely put other people and their interests first. They get it.

Low Entry Point

Because Agile reads as easy, one other big consequence has been the entry of unqualified and unsuitable practitioners into the field. Maybe this is akin to why there are so many charlatans in what is called “alternative” medicine such as homeopathy and acupuncture. There are a lot of ways and reasons to explain away why outcomes are not as expected. Of course, it is circular reasoning to say that if Agile were practiced correctly then outcomes could always be predicted (as desired). This is why it also baffles executives who are interested only in outcomes – totally understandable – and have only a foggy conception of what Agile means, and who proceed, therefore, to harry and micromanage it to death.

Team-Building, not reputation-building

Finally, to me a good Agile coach is like a parent. It’s the daily little disciplines and acts of patience, humility, and (dare I say) suffering that bring together a community of people embarked on a convergent goal. It’s not about making your mark via displays of bombastic self-virtuosity.

In conclusion: I picked the term “nirvana” almost unconsciously, but I think one aspect that calls to me is the constant striving. To be better, to keep trying, to never rest on one’s laurels. And liberation from painful habits and practices that keep us chained to mediocrity and hamper self-actualisation. (The term “best practice”, which is bandied about with unctuous glee, also makes me cringe nervously.)

Agile nirvana is a lofty ideal but one where the journey is as rich with rewards as the destination. And, my respects to the (very few) coaches who make trustworthy companions on this sojourn.

Unsure of how to begin your journey to Agile nirvana? Our experienced Agile Team would be happy to provide you with a guide! Contact us today.

How Two Necks in Redshift can help you from Losing your Head in Tableau
Thu, 29 Jun 2017
by Colin Carson, Sr. Business Intelligence Consultant

Abstract

When a curated data set is needed for reporting, a common design pattern is to create a view in the same database as the warehouse. This view serves as the endpoint or “head” for BI tools and end-users. This “one view in the warehouse db” pattern provides a layer of abstraction from underlying warehouse structures – the facts and dimensions in a star schema – but is not always the best approach due to several weaknesses (detailed below). Here we propose a high-level “two necks” design pattern with six parts: a dedicated reporting data mart database, a view in this reporting database, two staging tables, a view on top that serves as the single head or endpoint for users, and a processing approach that swaps the head between the two neck tables.

Assumptions

We assume some or all of the negatives in the Antipattern section are things your business wants to avoid.

We assume you desire a live connection to the warehouse rather than Tableau data extracts. In our solution, we found extracts performed slightly faster, but the data could not be refreshed as frequently as the ETL that created our user-facing endpoint view. Our ETL process ran microbatches every 30 seconds using a shell script that called SQL commands via psql.

We assume the cost of adding additional steps to your ETL logic to swap tables is worth the benefits.

Our implementation of this pattern used a Redshift warehouse because the transactional source data and ETL were already in AWS, and our dashboard used aggregations over potentially many rows (less than a million to start with, but potentially hundreds of millions). Other databases could be used for warehousing, and some might be more appropriate if high concurrency is required: Redshift supports 500 concurrent user connections, Aurora up to 16,000, and when Tableau Server is used with Redshift the default parallel query limit is 8.

We used Tableau for the dashboard, but this pattern can be applied to other frontend presentation tools too.

Antipattern

A view in the warehouse’s database is simple and can be implemented quickly, but it has weaknesses:

Tightly coupled: changes for end-users need to happen to the view in the warehouse’s database. Because downstream BI tools or single-tenant data marts are pointed to a view in the warehouse’s database, it means the end-users and warehouse are tightly coupled. When significant warehouse design changes need to occur, there is downtime and users are impacted. Conversely, when design changes are needed on the reporting side, the view in the warehouse’s database would need to be changed (because the reporting structures live in the warehouse, not in their own separate reporting database). Reporting changes cannot be made independently of warehouse changes. For more information about a reporting database, one resource is Martin Fowler’s writeup here.

Fully populating large tables takes time, and users are impacted if they have to wait for the load to finish. Users will see zero rows between the time the table is truncated and fully populated, which can happen frequently in a near real time pipeline (e.g., every 30 seconds). The duration of the refresh can be lengthy, because the table needs to be fully populated during this step (the database might have to do a lot of work). This processing delay for warehouse base tables can be lessened if processing involves a merge/upsert approach as documented here rather than a full load/destructive refresh (a sketch of such a merge follows this list); nevertheless, users may still have to wait while processing is happening.

No backup. There is only a single copy of the data that users access at any time. When the warehouse is processed, no snapshot is saved of a previous state that could be restored if there is a problem. If there is a problem with the ETL job processing the warehouse structures (such as between the time a base table is truncated and fully populated, or if there is a bug in the merge/upsert logic), there is no copy of the data in the system and end-users are impacted (reporting is inaccessible).

Performance interference between the warehouse and users. Users can negatively impact warehouse performance, and conversely warehouse processing can affect users. This is related to #1 but slightly different: where #1 is more about intermittent design changes, this item (#4) is about the performance experienced by end-users. Downstream dependencies such as data marts, BI tools, and end-users are querying the warehouse indirectly through the view. There is no middle step between users and the warehouse, and no physical instantiation of the data assembled for reporting needs.
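
For context, the merge/upsert mentioned in weakness #2 is typically implemented in Redshift as a staging table plus a delete-and-insert inside one transaction. The following is only a minimal sketch under assumed, hypothetical table names:

-- Minimal sketch of a Redshift-style merge/upsert (hypothetical names).
-- Fresh rows are assumed to have been loaded into f_sales_stage first;
-- the transaction keeps readers from seeing a half-applied state.
BEGIN;

-- remove target rows that have newer staged versions
DELETE FROM warehouse_schema.f_sales
USING staging_schema.f_sales_stage
WHERE f_sales.sales_id = f_sales_stage.sales_id;

-- insert the new and updated rows
INSERT INTO warehouse_schema.f_sales
SELECT * FROM staging_schema.f_sales_stage;

COMMIT;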

Recommended Pattern and how it can be Implemented

The weaknesses above can be mitigated with the two neck design shown in the diagram at the beginning of this article. Here’s how you can implement this solution architecture in Redshift:

1. Create a reporting database. Example:

CREATE SCHEMA report_schema;

2. Create a reporting view to serve as the integration component between the reporting data mart and the warehouse:

CREATE VIEW report_schema.v_example_dashboard_loader
AS
/* enter your logic here */

Our view used 4 tables (1 fact and 3 dimensions), with outer joins to 2 of the dimension tables. We had 6 calculated fields with aggregations (3 DENSE_RANK functions, 2 SUM, and 1 MAX) and 24 columns. The view returned 539,000 rows in 2.376 seconds using a dc1.large single-node cluster.
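
As a concrete illustration of what such a loader view can look like – every table and column name below is hypothetical, not from the original solution:

-- Illustrative loader view: 1 fact table, 3 dimensions (2 outer-joined),
-- with window and aggregate calculations of the kind described above.
CREATE VIEW report_schema.v_example_dashboard_loader
AS
SELECT d_date.calendar_date,
       d_product.product_name,
       d_region.region_name,
       SUM(f.sales_amount) AS total_sales,
       SUM(f.units_sold) AS total_units,
       MAX(f.updated_at) AS last_updated,
       DENSE_RANK() OVER (ORDER BY SUM(f.sales_amount) DESC) AS sales_rank
FROM warehouse_schema.f_sales f
JOIN warehouse_schema.d_date ON f.date_key = d_date.date_key
LEFT JOIN warehouse_schema.d_product ON f.product_key = d_product.product_key
LEFT JOIN warehouse_schema.d_region ON f.region_key = d_region.region_key
GROUP BY d_date.calendar_date, d_product.product_name, d_region.region_name;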

3. Create the loader and backup tables. These two tables are the two necks.

CREATE TABLE report_schema.t_example_dashboard_loader
AS
SELECT * FROM report_schema.v_example_dashboard_loader;

4. CREATE TABLE report_schema.t_example_dashboard_loader_backup
AS
SELECT * FROM report_schema.t_example_dashboard_loader;
-- notice the backup reads from the local primary table, not from the view,
-- which hits the warehouse with a query that can have expensive logic

5. Create the “head” view that serves as the endpoint for Tableau:

CREATE OR REPLACE VIEW report_schema.v_example_dashboard_userfacing
AS
SELECT * FROM report_schema.t_example_dashboard_loader;

6. In the ETL SQL script, use logic that takes a backup, then swaps the user-facing view to the backup, then loads the primary, then swaps back to the primary. (Note: merge/upsert approaches can potentially be used here as well.)

-- take a backup of the primary neck
TRUNCATE TABLE report_schema.t_example_dashboard_loader_backup;
INSERT INTO report_schema.t_example_dashboard_loader_backup
SELECT * FROM report_schema.t_example_dashboard_loader;
VACUUM report_schema.t_example_dashboard_loader_backup;

-- swap the user-facing head to the backup neck while the primary reloads
CREATE OR REPLACE VIEW report_schema.v_example_dashboard_userfacing
AS
SELECT * FROM report_schema.t_example_dashboard_loader_backup;

-- reload the primary neck from the warehouse-facing view
TRUNCATE TABLE report_schema.t_example_dashboard_loader;
INSERT INTO report_schema.t_example_dashboard_loader
SELECT * FROM report_schema.v_example_dashboard_loader;
VACUUM report_schema.t_example_dashboard_loader;

-- swap the head back to the freshly loaded primary neck
CREATE OR REPLACE VIEW report_schema.v_example_dashboard_userfacing
AS
SELECT * FROM report_schema.t_example_dashboard_loader;

If we define “one neck” as one staging table (no view on top), and “two necks” as two staging/neck tables with one head view on top (which swaps between the neck tables), we can compare processing time.

a. One neck

TRUNCATE TABLE report_schema.t_example_dashboard_loader;
INSERT INTO report_schema.t_example_dashboard_loader
SELECT * FROM report_schema.v_example_dashboard_loader;

Run durations: 7686 ms, 6271 ms, 6109 ms
Average: 6689 ms, or about 6.7 seconds

b. Two necks (only the durations of the statements that affect the user-facing view need to be compared, because only these steps affect users).

CREATE OR REPLACE VIEW report_schema.v_example_dashboard_userfacing
AS
SELECT * FROM report_schema.t_example_dashboard_loader_backup;

CREATE OR REPLACE VIEW report_schema.v_example_dashboard_userfacing
AS
SELECT * FROM report_schema.t_example_dashboard_loader;

Run durations: 2462 ms, 1221 ms, 1405 ms
Average: 1696 ms, or about 1.7 seconds

Note: when we doubled or even quadrupled the cluster’s processing power and added another cluster node, we did not see clear performance improvements.

Conclusion

In our test results, the “two necks” approach addresses all four antipattern points introduced above: it avoids tight coupling, decreases processing time and user waiting, keeps a backup, and speeds up user queries. We share these test results as one example where the pattern worked for us, so you can weigh this two-necks pattern against the common “view in the warehouse” pattern for your own solution architecture.

Regarding processing time, the fastest two-necks run (swapping the view to the staging table not undergoing processing) is more than 5x faster than the fastest run the old way; on average, the new way is almost 4x faster. We expect these numbers to be more significant with larger data sets in the billions of rows, after months or years of accumulated warehouse data, rather than in our warehouse, which was new. Our durations come from actual client tests of one view, and our results are not necessarily representative of yours.

Designing Customer Experience
Thu, 18 May 2017
by Doug Ugarte, Supply Chain Consultant

Designing and managing customer experience is a lot like chauffeuring your customer safely to their destination, when suddenly you have a flat tire… on the highway… at night… and have only a crescent wrench and flashlight to replace it. Did we mention that during this process, your customer should be relaxing in the backseat, blissfully unaware of the mission-critical upgrade you are performing under a constrained schedule? If you’re involved in designing or supporting customer experience, you’ve probably contributed to the Herculean efforts that go into producing delighted customers.

Whether your role sits in Supply Chain, Marketing, or Customer Service centers, a tremendous amount of focus is applied to the design and management of customer experience. Interestingly, we see each of these teams having varying degrees of ownership of designing customer experience, crafting solutions, implementing services, and managing toward a predictable and reliable result. How are you approaching the customer experience? Did you take an iterative approach based on customer feedback and/or competitive benchmarking? Or did you begin with a vision of the end to end experience and implement a solution to support it?

If you expect your customer to choose you for their next purchase, it is paramount that you deliver on your customer experience promise. Regardless of your product or industry, customers expect and demand flawless execution. eCommerce customers expect to know when their product will arrive before they even purchase it. After purchasing it, they want to know that it shipped on time, who is delivering it, its transit status, and that it arrived…all in real time. Manufacturing and commercial customers rely on you to deliver on your promises so that they can meet their own customer commitments. You fail once and you might get a second chance. You fail twice and you’re done.

Different departments could have ownership over designing customer experience. It may be Marketing because this could be a branding function at your company. It could be the Customer Service group because they are closest to managing the reality of customer experience. Or it may be your Supply Chain organization because most of the building blocks of the solution align to their charter. Regardless of the owner, a successful customer experience requires these teams to perform in concert to decide what to implement, execute the solution, and drive continuous improvement.

To be successful, you must have these core competencies:

Know what you can promise

Clearly articulate the experience that the Customer can expect

Keep your customer informed of progress of their purchase

Deal with issues as soon as they arise. Reset customer expectations if you are unable to resolve the issue behind the scenes.

Looking at the list of core competencies, everything falls into one of two themes – external customer communications or internal operations – that determine what can be promised and the certitude of delivering on that promise.

If you pull back the curtain at leading supply chains, you will find a customer promise that is built on knowing that product is available, how long it will take to ship, and how long it will take to arrive at the customer’s doorstep. The building blocks of these core competencies comprise the following strategies:

Formulate your Customer Promise

You can have a customized promise to each customer based on real time supply chain insights. Alternatively, you can provide a static offer to all customers and the offer can be adjusted if there are constraints affecting delivery.

Inventory Availability – You have unreserved and unrestricted product to ship.

Logistics – Calculate time to delivery. Based on ship-from/ship-to addresses, calculate the delivery commitment based on the customer-determined shipping service level (a sketch of this calculation follows this list).

Fulfillment Center – Visibility to operational performance. If SLAs are not being met, you might need to adjust the customer promise up front rather than apologizing for a missed commitment.
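
To make the logistics calculation concrete, here is a minimal sketch in SQL – all table and column names are hypothetical, and real promise engines account for far more (order cutoff times, weekends, carrier calendars):

-- Illustrative delivery-promise query (hypothetical schema).
-- Promise date = order date + fulfillment-center pick/pack SLA
--              + carrier transit days for the chosen service level.
SELECT o.order_id,
       o.order_date
         + fc.pick_pack_days     -- fulfillment center SLA
         + sl.transit_days       -- carrier transit time
         AS promised_delivery_date
FROM orders o
JOIN fulfillment_centers fc ON fc.fc_id = o.fc_id
JOIN service_levels sl ON sl.service_level = o.service_level;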

Visibility to Execution

Integration across all internal and external partners designed to provide real time visibility.

Integration should tell you 1) you have a problem, 2) what problem you have, 3) the scope of the problem, and 4) what time it occurred. Visualization tools help you see the issue and if it is growing.

This is where “batch” processing becomes the enemy of customer satisfaction. For every minute you don’t know about a problem that has already started, you diminish your ability to fix it without impacting the customer.

Exceptions Management

Identify your most common problems and incorporate automated exception handlers to resolve problems before they impact your customer experience. Examples include: Automatically upgrading your shipping service level from 2-day to Overnight because the fulfillment center is a day late. Or, automatically sending a replacement shipment if the logistics provider system shows the item lost or damaged in transit.
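
As a minimal illustration of automated exception detection – again with a hypothetical schema, not a prescribed design – a handler might poll for at-risk shipments like this:

-- Illustrative exception query (hypothetical names): find 2-day orders
-- whose fulfillment is running late, as candidates for an automatic
-- upgrade to overnight shipping before the promise date is missed.
SELECT o.order_id,
       o.promised_delivery_date,
       s.expected_ship_date
FROM orders o
JOIN shipments s ON s.order_id = o.order_id
WHERE s.actual_ship_date IS NULL
  AND CURRENT_DATE > s.expected_ship_date
  AND o.service_level = '2-day';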

If you cannot meet your customer commitment, reset expectations based on real time insights (same logic as formulating the original promise). Automated messaging keeps your customer informed and reduces the strain on customer service resources.

Leverage data insights from your Customer Service organization for feedback on how customers feel about your designed experience and how reliably you are meeting it. Social media analytics can offer additional visibility into what is being said about your company, your customer policies, customer experiences, and more. You should also assess your competition to understand what you’re up against. From competitive analysis, social media analytics, and customer feedback given directly to your company, you can garner a good understanding of purchasing drivers, how customers view you versus your competition, and how customer perception aligns to your designed customer experience. The more granular a view you have into the systems and partners involved in the end to end experience, the better you will be able to 1) compare customer perception to reality, and 2) evolve your supply chain to be more closely aligned with the reasons customers buy from you instead of your competitor.

Customer experience requires close collaboration between departments that sometimes measure success quite differently. It requires a complex technical solution to align directly to the customer experience design. The weakest link is frequently access to real time data, so systems integration becomes a prerequisite for success. Lastly, with the solution deployed, leverage every source of customer and market insight to understand if you’ve implemented the right design relative to both customer desires and the competitive landscape.

Lean Business Analysis
Thu, 20 Apr 2017
By Heather Smith, Business Solutions Consultant

As companies try to do more with less, Lean continues to grow in popularity. The Japanese manufacturing philosophy from Toyota emphasizes cutting waste and boosting efficiency. Other industries often struggle to adapt Lean to their processes because Lean was developed as a system to eliminate waste in automotive manufacturing in the 1960s. Today, Lean is evolving into a management approach to improve operational effectiveness (Kovacheva, 2010). However, because Lean methods were developed in the manufacturing industry, they often need to be adjusted to fit other processes—like business analysis. Put simply, “Lean business analysis is about doing the right thing at the right time and removing things that don’t add value” (Saboe, 2016).

It makes sense business analysis would use Lean to create efficiency and eliminate waste. The International Institute of Business Analysis (IIBA) – the organization responsible for defining the profession – calls business analysis “the practice of enabling change in an organizational context, by defining needs and recommending solutions that deliver value to stakeholders” (IIBA website). By definition, both Lean and business analysis involve examining and refining business behaviors and processes. In this article, I explore how Lean relates to business analysis in the technology sector.

Lean identifies several types of “muda” or waste (see the table below). The column on the left lists traditional/manufacturing definitions of waste. At a glance, none of these seem applicable to business analysis or software development – the column on the right shows how those wastes are defined in a software development context.

The Seven Wastes of Manufacturing | The Seven Wastes of Software Development
----------------------------------|------------------------------------------
Overproduction                    | Partially done work
Inventory                         | Extra features
Extra processing                  | Relearning
Transportation                    | Handovers
Motion                            | Task switching
Waiting                           | Delays
Defects                           | Defects

Source: Poppendieck & Poppendieck, 2007.

Overproduction/Partially Done Work:

Overproduction and partially done work are equally wasteful. The simplest example of overproduction in a non-manufacturing environment is “gold-plating.” The Project Management Institute (PMI) is clear in its position that delivering more than what is explicitly defined in the scope statement is not good for any project. Exceeding customer expectations is great—but schedules and budgets are estimated based on planned work. There are ways to go above and beyond without doing extra work. This is where innovation and ingenuity are vital.

Partially done work is another aspect of waste in business analysis. Most often this is because we forge ahead with insufficient information (Gottesdiener, 2009). In software development there is always some degree of uncertainty – Lean business analysis means delivering only what is needed, when it is needed. Some organizations want to have all requirements up front. This is not the best approach. Requirements are specific, standalone statements and should be verified as they are developed (Gottesdiener, 2005). Another common problem occurs when business analysts rely too heavily on templates or tools out of habit or protocol. Work products should be developed on an as-needed basis. A Lean methodology prioritizes only the tasks that add value.

Inventory/Extra Features:

Business analysts frequently encounter customers who want requirements documented in case they are needed later (“nice to have”/ “just in case”). Some customers want to document every possible permutation of user stories and requirements – even those they know they will likely never use. This can be difficult. Business analysts find and document the processes that add value. They also identify and eliminate those that don’t. Taking time to record everything leads to unused work and increases document management at both the project and enterprise levels.

One way to circumvent this is to create value stream maps. They provide a clear visualization of what is needed to perform business processes. They also enable the business analyst to demonstrate what requirements and use cases are truly necessary.

Extra Processing/Relearning:

In some cases, waste occurs when there are too many people involved in creating or approving a product. I once worked with a team that had a 29-step process for analyzing and approving minor changes to a software product. This quickly resulted in a significant backlog and made tracking work difficult.

Another common problem occurs when requirements and related business analysis tasks are done in bulk. Working in large batches leads to “relearning”: ongoing reviews of previous work, because it is difficult for people to remember precisely what was agreed on and why. The best way to avoid this type of waste is to develop requirements and business analysis products as close to the point of consumption as possible (Gottesdiener, 2009). This type of waste is closely related to Overproduction/Partially Done Work and Inventory/Extra Features.

Transportation/Handovers:

Depending on the number of people and teams involved in a technology project, there needs to be a strategy in place to avoid sending the same work products to different audiences for review. While it is certainly helpful to get feedback and to ensure shared understanding of outcomes, it is not beneficial to hold multiple rounds of reviews with separate audiences: different groups may give conflicting feedback, and the business analyst must then reconcile the contradictions, which means more meetings. This takes more time and increases costs.

Motion/Task Switching:

Motion is closely related to transportation in its potential to create waste and delay results. Unlike transportation, motion is not necessarily physical. Too many review/approval cycles, or email chains that involve several people and span several days, do not add business value. I have a rule: if the team cannot reach a shared understanding within three emails, we need a meeting or a phone conversation.

Task switching in business analysis is typically caused by a resource shortage. When organizations lack sufficient resources, they often assign the same people to multiple projects. This is workable only with careful coordination of tasks and deadlines. Task switching requires more attention to detail and work planning to ensure that one person’s workload does not have adverse downstream impacts (Gottesdiener, 2009).
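The cost of task switching can be made tangible with a common rule of thumb, often attributed to Gerald Weinberg, that each additional concurrent project loses roughly 20% of a person’s capacity to context switching. The sketch below applies that rule of thumb; the figure is an assumption, not data from this article.

# Rough context-switching model: each extra concurrent project
# costs ~20% of capacity (rule of thumb, illustrative only).
def capacity_per_project(n_projects: int, switch_loss: float = 0.20) -> float:
    usable = max(0.0, 1.0 - switch_loss * (n_projects - 1))
    return usable / n_projects

for n in range(1, 5):
    print(f"{n} project(s): {capacity_per_project(n):.0%} of a person each")
# 1 -> 100%, 2 -> 40%, 3 -> 20%, 4 -> 10%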

Waiting/Delays:

None of us accomplishes anything alone; we depend on others to complete key tasks. Inevitably, those people have competing deadlines and personal lives to contend with. It is impossible to eliminate waiting entirely; however, it is absolutely possible to identify known scheduling conflicts and plan activities around them. For example, if you know in advance that key teams or resources are unavailable, you can plan to work on other tasks or with other teams. Kupersmith recommends bookending each initiative: “One, plan your approach for the initiative and two, conduct a retrospective to learn and adapt for future initiatives. There should be a consistent start and a consistent end. Everything in between should be flexible” (Kupersmith, 2011). By breaking down tasks, you can deliver more quickly, proactively address resource constraints, and improve processes.

Defects:

Of all wastes, defects require the least explanation. No one wants to ship a faulty product. The sooner a defect is detected, the more easily (and cheaply) it can be fixed. This suggests inspecting to prevent defects rather than merely to find them. Creating smaller user stories and packages of requirements on an as-needed basis, and building mockups or draft prototypes, makes it easier to avoid defects (Galen, 2013).
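The “sooner is cheaper” point is often illustrated with a cost-escalation curve: a defect caught at requirements time costs a fraction of one caught in production. The multipliers below are illustrative assumptions in the spirit of that curve, not figures from this article or its sources.

# Illustrative defect-cost escalation (assumed multipliers, not source data).
relative_fix_cost = {
    "requirements": 1,
    "design": 5,
    "construction": 10,
    "testing": 25,
    "production": 100,
}

base_cost = 200  # assumed dollars to fix at requirements time
for phase, multiplier in relative_fix_cost.items():
    print(f"{phase:>12}: ${base_cost * multiplier:,}")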

Conclusion:

Ultimately, Lean business analysis means quickly deciding which tools to apply and which deliverables matter, and engaging only in activities that add value to your client, project, or organization. There is no single clear path: practicing Lean as a business analyst takes years of experience and trial and error. What works well for one team or project will not necessarily guarantee success on the next.

IIBA defines specific business analysis tasks alongside specific techniques used to accomplish each task. While there is value in defining tasks and relating them to appropriate tools, this information should not be taken as prescriptive. Not all tasks add value to all projects. Seasoned business analysts are able to quickly assess the situation, client, or project needs and craft a suitable strategy. Less experienced analysts are more likely to simply accept processes and templates without considering whether they add value or provide the most expedient return on investment. Templates and techniques are tools, and business analysts who know how and when to apply them will create more value by doing only what is needed to get the job done.

There is no single solution.

Wondering how your company could benefit from application of Lean Business Analysis? Contact one of our consultants today!

]]>http://plastergroup.com/lean-business-analysis/feed/0February 2017 Beyond Agilehttp://plastergroup.com/february-2017-beyond-agile/
http://plastergroup.com/february-2017-beyond-agile/#respondThu, 23 Feb 2017 22:40:28 +0000http://plastergroup.com/?p=43796Plaster Group was a proud sponsor of yesterday’s BeyondAgile event. This month’s meetup was at Getty Images and was presented by the gregarious Marius Grigoriu, Development Manager at Nordstrom. Grigoriu covered the very timely topic of “the trouble with DevOps and what to do about it.” The talk explored the dynamics of DevOps through several lenses, from the strategic and financial all the way to the engineering experience. The presentation closed with an exciting demo of a proposed solution in a live production environment. Join BeyondAgile on Meetup here to keep current on upcoming events!


]]>http://plastergroup.com/february-2017-beyond-agile/feed/0Ambient Computing and the Internet of Thingshttp://plastergroup.com/ambient-computing-internet-things/
http://plastergroup.com/ambient-computing-internet-things/#respondMon, 12 Dec 2016 22:20:24 +0000http://plastergroup.com/?p=43747by Sal Faizi, senior consultant

Ambient computing is becoming increasingly important as it allows organizations to stay competitive in innovative ways. Gartner predicts that “by 2020, the installed base of IoT devices will exceed 26 billion units worldwide.” Ambient computing uses an ecosystem of Internet-connected “Things,” the Internet of Things (IoT). The Things are sensors, machines, smart devices, and so on that signal a change and act as triggers.

Such an ecosystem presents revolutionary opportunities in many areas of our daily lives. We will discuss examples from the medical industry and the care of the elderly as well as another example of how supply chain operations can be transformed (see Process Orchestration below). First, let’s consider an example of elderly care.

One third of Americans aged 65+ experience a fall each year. These falls are a leading cause of fatal and nonfatal injuries, and time spent immobile after a fall often affects seniors’ health outcomes as muscle cells start to break down in as little as 30 minutes after falling. About half of older adults who fall cannot get up on their own; therefore, it is crucial for help to arrive as soon as possible. To further complicate this matter, the proportion of the population over the age of 65 will increase from 12.7% in 2000 to 20.3% in 2050 in the US, and to 30% in Europe. The best way to deal with this growing trend is to take advantage of technology such as ambient computing. A medical-grade human pose detector akin to the Xbox Kinect can detect a fall and also whether the patient has taken medication, eaten his or her meal, etc. A variety of physical activities could be recorded and acted upon allowing fewer caregivers to provide care to a growing population.
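As a toy illustration of the detection half of that scenario, the sketch below flags a fall when a tracked head height drops sharply within a short window. Real pose-based fall detection is far more sophisticated; the threshold, sample rate, and data here are all assumptions for illustration.

# Toy fall heuristic: flag a fall if tracked head height drops more
# than 0.9 m within one second. All values are illustrative assumptions.
def detect_fall(head_heights_m, samples_per_sec=10,
                drop_m=0.9, window_sec=1.0):
    window = int(samples_per_sec * window_sec)
    for i in range(len(head_heights_m) - window):
        if head_heights_m[i] - head_heights_m[i + window] > drop_m:
            return True  # would trigger an alert to a caregiver
    return False

standing_then_fall = [1.6] * 20 + [1.5, 1.0, 0.4, 0.3] + [0.3] * 10
print(detect_fall(standing_then_fall))  # True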

Ultimately, it is not just activity detection that is important, but rather acting upon these alerts. This is where the true power of ambient computing lies. In order to maximize value from ambient computing, organizations must move their focus beyond physical devices to analysis of the data collected from them. To maximize success, a framework is necessary for ambient computing to garner real-time, actionable insights from IoT.

By addressing all major areas of ambient computing, an organization can maximize its impact on the business. The framework consists of the following stages:

Each stage has a specific goal.

Collection: Collecting and integrating information from a variety of Things or devices, from a variety of manufacturers, using standard or proprietary data exchange formats

Analytics: Performing analytics to determine data trends and device behavior, alone and in conjunction with other devices, to detect and predict impact

Process Orchestration: Applying end-to-end process orchestration to respond to the impact in real time. Often this allows trends to be detected and corrected before they materialize into issues

Security: Applying information security discipline to keep the system safe and to prevent tampering

Intelligence can be brought to almost any scenario with recent advances in smart devices, sensors, and ubiquitous connectivity. The possibilities of ambient computing are seemingly endless with the users at the center.

Collection

Sensor and IoT data is rich in variety, volume, and velocity. Variety comes from the many protocols and formats used by IoT devices. All this data must be collated and consumed in an integrated fashion. Data arriving from IoT also tends to be voluminous. Consider a smart thermostat that transmits temperature data every 30 seconds, multiplied by several thermostats on each floor of a building, multiplied by the number of buildings an organization has. Data arrives not only in high volume but also at high velocity. Data with variety, velocity, and volume is a natural fit for big data.
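The arithmetic behind that thermostat example is worth making explicit. The device counts below are assumptions for illustration; only the 30-second interval comes from the example above.

# Back-of-envelope message volume for the thermostat example.
# Device counts are illustrative assumptions.
thermostats_per_floor = 8
floors_per_building = 12
buildings = 5
seconds_between_readings = 30

devices = thermostats_per_floor * floors_per_building * buildings
readings_per_day = devices * (24 * 60 * 60 // seconds_between_readings)
print(f"{devices} devices -> {readings_per_day:,} readings/day")
# 480 devices -> 1,382,400 readings/day

Even this modest deployment produces well over a million readings a day, which is why big data tooling enters the picture.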

Analytics

Once data is collected, transformed, and integrated, the important step of analytics follows. By applying analytics, important insights can be harvested: for example, the treatment path a patient is expected to take after arriving at a hospital, or projecting and preparing to meet the power demand of a customer base whose houses are equipped with smart electric power meters.
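As a minimal sketch of the smart-meter example, a rolling average over recent readings can project near-term demand. A real forecast would account for seasonality, weather, and much more; the window size and readings below are simplified assumptions.

# Naive demand projection from smart-meter readings (kW), illustrative only.
from collections import deque

def rolling_forecast(readings_kw, window=4):
    recent = deque(readings_kw, maxlen=window)  # keep only the last readings
    return sum(recent) / len(recent)            # project the next interval

meter_readings = [3.1, 3.4, 3.2, 4.8, 5.1, 5.0, 5.3]
print(f"Projected next-interval demand: {rolling_forecast(meter_readings):.2f} kW")
# mean of the last four readings, roughly 5.05 kW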

Process Orchestration

The next stage of ambient computing involves orchestrating processes based on the insights and signals obtained from the data through analytics. Some process orchestration may involve human interaction, while some can be performed automatically by ambient computing. For example, supply chain operations could make leaps in progress by integrating consumer sentiment from social media, current inventory levels, and consumer purchasing habits, and analyzing this information to better match supply with demand.
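A minimal orchestration rule, continuing the power-demand example above, might route a high projection to an automatic action and a borderline one to a human. The thresholds and action names are assumptions for illustration.

# Toy orchestration rule for a projected-demand signal (assumed thresholds).
def orchestrate(projected_kw: float) -> str:
    if projected_kw > 6.0:
        return "auto: bring additional capacity online"  # automatic response
    if projected_kw > 5.0:
        return "notify: operator review recommended"     # human in the loop
    return "no action"

for kw in (4.2, 5.3, 6.8):
    print(kw, "->", orchestrate(kw))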

Visualization is an extremely important element of process orchestration via human interaction. A smartly designed dashboard that presents information in an easy-to-understand format gives the business clearer visibility.

Security

The more Things there are in an IoT network, the more pathways there are, with each pathway presenting a risk of being compromised. Thus, security is an important consideration for keeping the network safe from intruders. Consider how data access is granted to each individual Thing, how securely data is stored, how data is transported, and how access is audited.
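One concrete item on that checklist is verifying that a message really came from the device it claims to come from. The sketch below signs and verifies device messages with a per-device shared secret using Python’s standard hmac module; the device registry and key handling are simplified assumptions.

# Verify message origin/integrity with a per-device shared secret.
# Key storage and distribution are simplified for illustration.
import hashlib
import hmac

device_keys = {"thermostat-42": b"per-device-secret"}  # assumed registry

def sign(device_id: str, payload: bytes) -> str:
    return hmac.new(device_keys[device_id], payload, hashlib.sha256).hexdigest()

def verify(device_id: str, payload: bytes, signature: str) -> bool:
    expected = sign(device_id, payload)
    return hmac.compare_digest(expected, signature)  # constant-time compare

msg = b'{"temp_c": 21.5}'
sig = sign("thermostat-42", msg)
print(verify("thermostat-42", msg, sig))                # True
print(verify("thermostat-42", b'{"temp_c": 99}', sig))  # False: reject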

Summary

Ambient computing scenarios involve not just connectivity and interoperability but also advanced levels of orchestration and analytics, wrapped in sophisticated yet simple user experiences. Ambient computing opens new possibilities for organizations. However, a successful implementation requires an approach based on the principles of Agile, Business Intelligence, and Project Management, along with experience in dealing with ambiguity. Such an approach can mean the difference between success and failure of a new and critical technology like ambient computing.

Have a question about ambient computing or data visualization? Contact us today!

]]>http://plastergroup.com/ambient-computing-internet-things/feed/0Fourth Annual Rise Up Luncheonhttp://plastergroup.com/fourth-annual-rise-luncheon/
http://plastergroup.com/fourth-annual-rise-luncheon/#commentsMon, 07 Nov 2016 23:35:43 +0000http://plastergroup.com/?p=43723

]]>Plaster Group was once again honored to sponsor and attend the fourth annual Rise Up Luncheon for ROOTS Young Adult Shelter. It has been wonderful watching this event grow over the past four years and receiving updates on the powerful impact that ROOTS has within our community. At its incorporation in 2000, ROOTS became the first overnight shelter in the city specifically designed to meet the needs of homeless young adults. Since then, it has continued to increase the breadth of shelter and services it provides to Seattle’s homeless youth.

Friday’s luncheon celebrated the ROOTS volunteers who keep the shelter running. We heard from volunteer speakers who discussed the impact that their involvement with ROOTS has had upon them. Seattle Mayor Ed Murray made an impromptu appearance to discuss the hope that drives ongoing efforts to overcome homelessness in Seattle. We also heard compelling speeches from Toby Crittenden, Executive Director of the Washington Bus, and from Kristine Scott, Executive Director of ROOTS.