
Tag Archives: Subway Fold

Throughout grades 1 through 6 at Public School 79 in Queens, New York, the teachers had one universal command they relied upon to try to quickly gather and organize the students in each class during various activities. They would announce “Single file, everyone”, and expect us all to form a straight line with one student after the other, all pointed in the same direction. They would usually deploy this to move us in an orderly fashion to and from the lunchroom, schoolyard, gym and auditorium. Not that this always worked, as several requests were usually required to get us all to quiet down and line up.

Just as it was used back then as a means to bring order to a room full of energetic grade-schoolers, those three magic words can now be re-contextualized and re-purposed for today’s digital everything world when applied to a new means of bringing more control and safety to our personal data. This emerging mechanism is called the universal digital profile (UDP). It involves the creation of a dedicated file to compile and port an individual user’s personal data, content and usage preferences from one online service to another.

This is being done in an effort to provide enhanced protection to consumers and their digital data at a critical time when there have been so many online security breaches of major systems that were supposedly safe. More importantly, these devastating hacks during the past several years have resulted in massive betrayals of users’ trust, which now needs to be restored.

Clearly and concisely setting the stage for the development of UDPs was an informative article on TechCrunch.com entitled The Birth of the Universal Digital Profile, by Rand Hindi, posted on May 22, 2018. I suggest reading it in its entirety. I will summarize and annotate it, and then pose some of my own questions about these, well, pro-files.

Image from Pixabay

The Need Arises

It is axiomatic today that there is more concern over online privacy among Europeans than other populations elsewhere. This is due, in part, to the frequency and depth of the above-mentioned deliberate data thefts. These incidents and other policy considerations led to the May 25, 2018 enactment and implementation of the General Data Protection Regulation (GDPR) across the EU.

Among its many requirements, the GDPR ensures that all individuals have the right to personal data portability, whereby the users of any online services can request that these sites have their personal data “transferred to another provider, without hindrance”. This must be done in a file format the receiving provider requires. For example, if a user is changing from one social network to another, all of his or her personal data is to be transferred to the new social network in a workable file format.

The exact definition of “personal profile” is still open to question. The net effect of this provision is that one’s “online identity will soon be transferable” to numerous other providers. As such transfer requests increase, corporate owners of such providers will likely “want to minimize” their means of compliance. The establishment of standardized data formats and application programming interfaces (APIs) enabling this process would be a means to accomplish this.²
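To make the idea of a standardized, portable format a bit more concrete, here is a minimal sketch in Python of what a UDP export and re-import might look like. The schema, field names and use of JSON are purely my own assumptions for illustration; no such standard actually exists yet.

```python
import json
from dataclasses import dataclass, asdict, field

# Hypothetical sketch of a minimal universal digital profile (UDP).
# All field names here are illustrative assumptions, not a real standard.
@dataclass
class UniversalDigitalProfile:
    user_id: str
    display_name: str
    email: str
    preferences: dict = field(default_factory=dict)
    content_refs: list = field(default_factory=list)

    def export(self) -> str:
        """Serialize the profile to a portable, provider-neutral format."""
        return json.dumps(asdict(self), indent=2)

    @classmethod
    def load(cls, payload: str) -> "UniversalDigitalProfile":
        """Reconstruct a profile from a file exported by another provider."""
        return cls(**json.loads(payload))

profile = UniversalDigitalProfile(
    user_id="u-1001",
    display_name="Jane Doe",
    email="jane@example.com",
    preferences={"language": "en", "newsletter": False},
)
exported = profile.export()                      # file handed to the new provider
restored = UniversalDigitalProfile.load(exported)  # new provider re-imports it
```

The point of the sketch is simply that once the format is agreed upon, any two providers can exchange the same file through a common API without either one controlling it.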

Aurora Borealis, Image by Beverly

A Potential Solution

It will soon become evident to consumers that their digital profiles can become durable, reusable and, hence, universal for other online destinations. They will view their digital profiles “as a shared resource” for similar situations. For instance, if a user has uploaded his or her profile to a site for verification, in turn, he or she should be able to re-use such a “verified profile elsewhere”.³

This would be similar to Facebook Connect’s functionality but with one key distinction: Facebook would retain no discretion at all over where the digital profile goes and who can access it following its transfer. That control would remain entirely with the profile’s owner.

As the UDP enters “mainstream” usage, it may well give rise to “an entire new digital economy”. This might include new services such as “personal data clouds to personal identity aggregators or data monetization platforms”. In effect, increased interoperability between and among sites and services for UDPs might enable these potential business opportunities to take root and then scale up.

Digital profiles, especially now for Europeans, are one of the critical “impacts of the GDPR” on their online lives and freedom. Perhaps its objectives will spread to other nations.

My Questions

Can the UDP’s usage be expanded elsewhere without the need for enacting GDPR-like regulation? That is, for economic, public relations and technological reasons, might online services support UDPs on their own initiatives rather than waiting for more governments to impose such requirements?

What additional data points and functional capabilities would enhance the usefulness, propagation and extensibility of UDPs?

What other business and entrepreneurial opportunities might emerge from the potential web-wide spread of a GDPR and/or UDP-based model?

Are there any other Public School 79 graduates out there reading this?

On a very cold night in New York on December 20, 2017, I had an opportunity to attend a fascinating presentation by Dr. Irene Ng before the Data Scientists group from Meetup.com about an inventive alternative for dispensing one’s personal digital data called the Hub of All Things (HAT). [Clickable also @hubofallthings.] In its simplest terms, this involves the provision of a form of virtual container (the “HAT” situated on a “micro-server”), storing an individual’s personal data. This system enables the user to have much more control over who may access their data, and to what degree, among any online services, vendors or sites. For the details on the origin, approach and technology of the HAT, I highly recommend a click-through to a very enlightening new article on Medium.com entitled What is the HAT?, by Jonathan Holtby, posted yesterday on June 6, 2018.

1. This week’s news brings yet another potential scandal for Facebook following reports that they shared extensive amounts of personal user data with mobile device vendors, including Huawei, a Chinese company that has been reported to have ties with China’s government and military. Here is some of the lead coverage so far from this week’s editions of The New York Times:

Yesterday, on May 30, 2018, at the 2018 Code Conference being held this week in Rancho Palos Verdes, California, Mary Meeker, a world-renowned Internet expert and partner in the venture capital firm Kleiner Perkins, presented her seventeenth annual in-depth and highly analytical presentation on current Internet trends. It is an absolutely remarkable accomplishment that is highly respected throughout the global technology industry and economy. The video of her speech is available here on Recode.com.

That is just the tip of the tip of the iceberg in this 294-slide deck.

Ms. Meeker’s assessments and predictions here form an extraordinarily comprehensive and insightful piece of work. There is much here for anyone and everyone to learn and consider in the current and trending states of nearly anything and everything online. Moreover, there are likely many potential opportunities for new and established businesses, as well as other institutions, within this file.

I very highly recommend that you set aside some time to thoroughly read through and fully immerse your thoughts in Ms. Meeker’s entire presentation. You will be richly rewarded with knowledge and insight that can potentially yield a world of informative, strategic and practical dividends.

September 15, 2018 Update: Mary Meeker has left Kleiner Perkins to start her own investment firm. The details of this are reported in an article in the New York Times entitled Mary Meeker, ‘Queen of the Internet,’ Is Leaving Kleiner Perkins to Start a New Fund, by Erin Griffith, posted on September 14, 2018. I wish her great success with her new venture. I also hope that she will still have enough time to continue publishing her brilliant annual reports on Internet trends.

While one of The Who’s first hit singles, I Can See for Miles, was most certainly not about data visualization, it still might – – on a bit of a stretch – – find a fitting new context in describing one of the latest dazzling new technologies in the opening stanza’s declaration “there’s magic in my eye”. In determining Who’s who and what’s what about all this, let’s have a look at a report on a new tool enabling data scientists to indeed “see for miles and miles” in an exciting new manner.

This innovative approach was recently the subject of a fascinating article by an augmented reality (AR) designer named Benjamin Resnick about his team’s work at IBM on a project called Immersive Insights, entitled Visualizing High Dimensional Data In Augmented Reality, posted on July 3, 2017 on Medium.com. (Also embedded is a very cool video of a demo of this system.) They are applying AR’s rapidly advancing technology¹ to display, interpret and leverage insights gained from business data. I highly recommend reading this in its entirety. I will summarize and annotate it here and then pose a few real-world questions of my own.

Immersive Insights into Where the Data-Points Point

As Resnick foresees such a system in several years, a user will start his or her workday by donning their AR glasses and viewing a “sea of gently glowing, colored orbs”, each of which visually displays their business’s big data sets². The user will be able to “reach out select that data” which, in turn, will generate additional details on a nearby monitor. Thus, the user can efficiently track their data in an “aesthetically pleasing” and practical display.

The project team’s key objective is to provide a means to visualize and sum up the key “relationships in the data”. In the short-term, the team is aiming Immersive Insights towards data scientists who are facile coders, enabling them to visualize time series, geographical and networked data using AR’s capabilities. For their long-term goals, they are planning to expand the range of Immersive Insight’s applicability to the work of business analysts.

For example, Instacart, a same-day food delivery service, maintains an open source data set on food purchases (accessible here). Every consumer represents a data-point wherein they can be expressed as a “list of purchased products” from among 50,000 possible items.

How can this sizable pool of data be better understood and the deeper relationships within it be extracted and understood? Traditionally, data scientists create a “matrix of 2D scatter plots” in their efforts to intuit connections in the information’s attributes. However, for those sets with many attributes, this methodology does not scale well.

Consequently, Resnick’s team has been using their own new approach to:

Reduce complex data to just three dimensions in order to sum up key relationships

Visualize the data by applying their Immersive Insights application, and

“Iteratively label and color-code the data” in conjunction with an “evolving understanding” of its inner workings

Their results have enabled them to “validate hypotheses more quickly” and establish a sense about the relationships within the data sets. As well, their system was built to permit users to employ a number of versatile data analysis programming languages.
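The reduction step described above can be illustrated with a short sketch. This is not IBM’s actual code; it simply applies principal component analysis (computed here with a singular value decomposition in NumPy) to project a made-up customer-by-product purchase matrix down to the three dimensions an AR tool could then render as points in space:

```python
import numpy as np

# Illustrative only: 200 hypothetical customers, 50 product attributes.
rng = np.random.default_rng(42)
purchases = rng.random((200, 50))

# PCA via SVD: center the data, then keep the top three principal components.
centered = purchases - purchases.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
coords_3d = centered @ Vt[:3].T   # each customer becomes one (x, y, z) point

print(coords_3d.shape)            # (200, 3)
```

Each row of `coords_3d` is one customer, positioned so that the directions of greatest variance in purchasing behavior become the three spatial axes of the visualization.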

The types of data sets being used here are likewise deployed in training machine learning systems³. As a result, the potential exists for these three technologies to become complementary and mutually supportive in identifying and understanding relationships within the data as well as deriving any “black box predictive models”.⁴

Analyzing the Instacart Data Set: Food for Thought

Passing over the more technical details provided on the creation of the team’s demo in the video (linked above), and next turning to the results of the visualizations, their findings included:

A great deal of the variance in Instacart’s customers’ “purchasing patterns” was between those who bought “premium items” and those who chose less expensive “versions of similar items”. In turn, this difference has “meaningful implications” in the company’s “marketing, promotion and recommendation strategies”.

Among all food categories, produce was clearly the leader. Nearly all customers buy it.

When the users were categorized by the “most common department” they patronized, they were “not linearly separable”. That is, in terms of purchasing patterns, this “categorization” missed most of the variance in the system’s three main components (described above).

Resnick concludes that the three cornerstone technologies of Immersive Insights – – big data, augmented reality and machine learning – – are individually and in complementary combinations “disruptive” and, as such, will affect the “future of business and society”.

Questions

Can this system be used on a real-time basis? Can it be configured to handle changing data sets in volatile business markets where there are significant changes within short time periods that may affect time-sensitive decisions?

Would web metrics be a worthwhile application, perhaps as an add-on module to a service such as Google Analytics?

Is Immersive Insights limited only to business data or can it be adapted to less commercial or non-profit ventures to gain insights into processes that might affect high-level decision-making?

Is this system extensible enough so that it will likely end up finding unintended and productive uses that its designers and engineers never could have anticipated? For example, might it be helpful to juries in cases involving technically or financially complex matters such as intellectual property or antitrust?

2. See the Subway Fold category of Big Data and Analytics for other posts covering a range of applications in this field.

3. See the Subway Fold category of Smart Systems for other posts on developments in artificial intelligence, machine learning and expert systems.

4. For a highly informative and insightful examination of this phenomenon where data scientists on occasion are not exactly sure about how AI and machine learning systems produce their results, I suggest a click-through and reading of The Dark Secret at the Heart of AI, by Will Knight, which was published in the May/June 2017 issue of MIT Technology Review.

Fast forward thirteen years to a recent article entitled Exoskin: A Programmable Hybrid Shape-Changing Material, by Evan Ackerman, posted on IEEE Spectrum on June 3, 2016. This is about an all-new and entirely different development, quite separate from quantum dots, but nonetheless a current variation on the concept that matter can be programmed for new applications. While we always think of programming as involving systems and software, this new story takes and literally stretches this long-established process into some entirely new directions.

I highly recommend reading this most interesting report in its entirety and viewing the two short video demos embedded within it. I will summarize and annotate it, and then pose several questions of my own on this, well, matter. I also think it fits in well with these 10 Subway Fold posts on other recent developments in material science including, among others, such way cool stuff as Q-Carbon, self-healing concrete and metamaterials.

Matter of Fact

The science of programmable matter is still in its formative stages. The Tangible Media Group at MIT Media Lab is currently working on this challenge as one of its scores of imaginative projects. A student pursuing his Master’s Degree in this group is Basheer Tome. Among his current research projects, he is working on a type of programmable material he calls “Exoskin” which he describes as “membrane-backed rigid material”. It is composed of “tessellated triangles of firm silicone mounted on top of a stack of flexible silicone bladders”. By inflating these bladders in specific ways, Exoskin can change its shape in reaction to the user’s touch. This activity can, in turn, be used to relay information and “change functionality”.

Although this might sound a bit abstract, the two accompanying videos make the Exoskin’s operations quite clear. For example, it can be applied to a steering wheel which, through “tactile feedback”, can inform the driver about direction-finding using GPS navigation and other relevant driving data. This is intended to lower driver distractions and “simplify previously complex multitasking” behind the wheel.
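To make the steering wheel example a bit more concrete, here is a purely hypothetical sketch of how navigation cues might be mapped to inflation levels for bladder segments around a wheel rim. The cue names and the eight-segment layout are my own assumptions, not details from the article:

```python
# Hypothetical sketch: map navigation cues to bladder inflation levels
# (0.0 = flat, 1.0 = fully raised). The 8-segment rim layout is assumed.
NUM_SEGMENTS = 8

def inflation_pattern(cue: str) -> list:
    pattern = [0.0] * NUM_SEGMENTS
    if cue == "turn_left":
        pattern[6] = pattern[7] = 1.0   # raise the segments under the left hand
    elif cue == "turn_right":
        pattern[1] = pattern[2] = 1.0   # raise the segments under the right hand
    elif cue == "arrived":
        pattern = [0.5] * NUM_SEGMENTS  # gentle uniform pulse around the rim
    return pattern

print(inflation_pattern("turn_left"))
```

The idea is that the driver feels the upcoming turn through the hands rather than glancing at a screen, which is exactly the reduction in distraction the article describes.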

The Exoskin, in part, by its very nature makes use of haptics (using touch as a form of interface). One of the advantages of this approach is that it enables “fast reflexive motor responses to stimuli”. Moreover, the Exoskin actually involves inputs that “are both highly tactily perceptible and visually interpretable”.

Fabrication Issues

A gap still exists between the current prototype and a commercially viable product in the future in terms of the user’s degree of “granular control” over the Exoskin. The number of “bladders” underneath the rigid top materials will play a key role in this. Under existing fabrication methods, multiple bladders in certain configurations are “not practical” at this time.

However, this restriction might be changing. Soon it may be possible to produce bladders for each “individual Exoskin element” rather than a single bladder for all of them. (Again, the videos present this.) This would involve a system of “reversible electrolysis” that alternately separates water into hydrogen and oxygen and then back again into water. Other options to solve this fabrication issue are also under consideration.

Mr. Tome hopes this line of research disrupts the distinction between what is “rigid and soft” as well as “animate and inanimate” to inspire Human-Computer Interaction researchers at MIT to create “more interfaces using physical materials”.

My Questions

In what other fields might this technology find viable applications? What about medicine, architecture, education and online gaming just to begin?

Might Exoskin present new opportunities to enhance users’ experience with the current and future releases of virtual reality and augmented reality systems? (These 15 Subway Fold posts cover a sampling of trends and developments in VR and AR.)

How might such an Exoskin-embedded steering wheel possibly improve drivers’ and riders’ experiences with Uber and other ride-sharing services?

Way before the advent of email, when people exclusively wrote letters on paper and mailed them to each other (yes, this really did happen once upon a time), there was a long-running scam known as the “chain letter”. Recipients of such a letter were asked, often through manipulative language, to copy it and send it on to as many other people as possible. In effect, these were structured as fraudulent pyramid schemes that ultimately would collapse in on themselves.

Sometimes chain letters involved illegal financial dealings and other hoaxes, also producing unwanted emotional effects on those who mistakenly fell for them. Variations of the chain letter still survive today online and operate using email, texting and social media.

However, an emerging new form of virtual chain, in conjunction with the mail service, might soon appear – – namely, using the blockchain – – within the U.S. Postal Service (USPS). Unlike its fraudulent namesake, this combination could potentially produce four very positive improvements in services. These exciting prospects were the subject of a most interesting new post on Quartz.com on May 24, 2016 entitled Even the US Postal Service Wants to Start Using Blockchain Tech, by Ian Kar. I recommend reading this article in its entirety. I will summarize and annotate it, and pose some questions of my own (but without any additional postage due).

While blockchain technology has been getting a great deal of press coverage recently involving innovative new development initiatives in, among other fields, finance, law, government and the arts, this story illustrates how it also might affect something as routine and mundane as mail service with possibly dramatic results. Such changes could produce significant economic and logistical advances that would affect just about anyone who checks their real world mailbox every day.

Traditionally, the USPS has never really distinguished itself as a leader in innovation. Rather, it has a long reputation for its inefficient operations. This could possibly be significantly changed by this series of blockchain proposals. Because this technology is decentralized, widely accessible, and secured by encryption, it is highly resistant to tampering.

1. Financial Services: US post offices currently offer a limited number of financial services such as international money transfers. The OIG report speculated that the USPS “could benefit from developing its own bitcoin-like digital currency”. Perhaps it could be called “Postcoin”. This would permit the expansion into other financial services such as a “global payment service” for people without traditional bank accounts.

2. Identity: An individual’s identity could be verified for the USPS using a blockchain. Essentially, they already do this when they deliver your mail to you each day. By using a blockchain for this, the USPS could help you manage both your online and offline identities “by storing it on an immutable ledger”.

3. Logistics Support: Applying the blockchain to support the Internet of Things (IoT) could enhance the USPS’s logistics management operations. The OIG report imagines a system where “vehicles and sorting equipment could manage their own tracking, monitoring, and maintenance”. This could include items such as autonomously, efficiently and economically monitoring brake pad performance, including:

4. Mail Tracking: On a daily basis, the USPS delivers 509 million pieces of mail. As stated in the OIG report, the blockchain can be deployed to uniquely identify each piece of it. This could be done with “a small sensor” on each piece in order to use the blockchain to “manage the chain of custody between different USPS partners, like UPS and FedEx”. As well, the blockchain could be put to the additional uses of:

Expediting customs clearance

Integrating payments

Shipping upon one unified platform

[All of these components form the very convenient acronym FILM, thus making it easier to, well, picture.]
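As a rough illustration of the mail-tracking idea, here is a toy hash-linked ledger in Python in which each custody event references the hash of the previous one, so tampering with any earlier record breaks the chain. This is only a sketch of the general blockchain concept, not anything drawn from the OIG report itself:

```python
import hashlib
import json

# Toy chain-of-custody ledger: each block stores the hash of its predecessor.
def make_block(prev_hash: str, event: dict) -> dict:
    body = {"prev_hash": prev_hash, "event": event}
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    return body

ledger = [make_block("0" * 64, {"piece": "PKG-123", "custodian": "USPS", "status": "accepted"})]
ledger.append(make_block(ledger[-1]["hash"], {"piece": "PKG-123", "custodian": "UPS", "status": "in transit"}))
ledger.append(make_block(ledger[-1]["hash"], {"piece": "PKG-123", "custodian": "FedEx", "status": "delivered"}))

# Verify the chain: every block must reference the hash of the one before it.
valid = all(ledger[i]["prev_hash"] == ledger[i - 1]["hash"] for i in range(1, len(ledger)))
print(valid)  # True
```

In a real deployment the ledger would be replicated across the USPS and its partners, which is what makes a shared, tamper-evident custody record possible without any single party controlling it.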

For now, the USPS intends to keep studying blockchain technology. The OIG report states that the agency “could benefit from experimenting” with it on new financial products and then eventually progress on toward “more complex uses”.

“Stamped Mail to be Posted”, Image by Steven Depolo

My Questions

Would these blockchain apps have a negative impact on USPS revenues, as this massive government agency has been running at a budget deficit for many years? If so, would this have unintended negative consequences for consumers and/or the USPS?

Conversely, can the USPS use blockchain innovations to create new sources of revenue and employment? What new sorts of job descriptions and titles might emerge?

Would the blockchain do away with the traditional services of certified, registered, priority and insured mail? If so, what forms of proof of delivery or non-delivery could be provided to consumers?

Would any of these proposed new apps possibly create new privacy issues for consumers and policy concerns for the US government?

What type of opportunities might arise for entrepreneurs to create new mail apps built on the blockchain?

The subject matter of this test is the professional ethical roles and responsibilities a lawyer must abide by as an advocate and counselor to clients, courts and the legal profession. It is founded upon a series of ethical considerations and disciplinary rules that are strictly enforced by the bars of each state. Violations can potentially lead to a series of professional sanctions and, in severe cases depending upon the facts, disbarment from practice for a term of years or even permanently.

In other professions including, among others, medicine and accounting, similar codes of ethics exist and are expected to be scrupulously followed. They are defined efforts to ensure honesty, quality, transparency and integrity in their industries’ dealings with the public, and to address certain defined breaches. Many professional trade organizations also have formal codes of ethics but often do not have much, if any, sanction authority.

Should some comparable forms of guidelines and boards likewise be put into place to oversee the work of big data researchers? This was the subject of a very compelling article posted on Wired.com on May 20, 2016, entitled Scientists Are Just as Confused About the Ethics of Big-Data Research as You by Sharon Zhang. I highly recommend reading it in its entirety. I will summarize, annotate and add some further context to this, as well as pose a few questions of my own.

Two Recent Data Research Incidents

Last month, an independent researcher released, without permission, the profiles with very personal information of 70,000 users of the online dating site OKCupid. These users were quite angered by this. OKCupid is pursuing a legal claim to remove this data.

Earlier, in 2014, researchers at Facebook manipulated items in users’ News Feeds for a study on “mood contagion“.¹ Many users were likewise upset when they found out. The journal that published this study released an “expression of concern”.

Users’ reactions over such incidents can have an effect upon subsequent “ethical boundaries”.

Nonetheless, the researchers involved in both of these cases had “never anticipated” the significant negative responses to their work. The OKCupid study was not scrutinized by any “ethical review process”, while a review board at Cornell had concluded that the Facebook study did not require a full review because the Cornell researchers only had a limited role in it.

Both of these incidents illustrate how “untested the ethics” of such big data research are. Only now are the review boards that oversee the work of these researchers starting to pay attention to emerging ethical concerns. This is in sharp contrast to the controls and guidelines governing medical research in clinical trials.

The Applicability of The Common Rule and Institutional Research Boards

In the US, under the Common Rule, which governs ethics for federally funded biomedical and behavioral research where humans are involved, studies are required to undergo an ethical review. However, such review does not apply a “unified system”, but rather, each university maintains its own institutional review board (IRB). These are composed of other (mostly medical) researchers at each university. Only a few of them “are professional ethicists”.

Even fewer have experience in computer technology. This deficit may be affecting the protection of subjects who participate in data science research projects. In the US, there are hundreds of IRBs, but they are each dealing with “research efforts in the digital age” in their own ways.

Both the Common Rule and the IRB system came into being following the revelation in the 1970s that the U.S. Public Health Service had, between 1932 and 1972, engaged in a terrible and shameful secret program that came to be known as the Tuskegee Syphilis Experiment. This involved leaving African Americans living in rural Alabama with untreated syphilis in order to study the disease. As a result of this outrage, the US Department of Health and Human Services created new regulations concerning any research on human subjects they conducted. All other federal agencies likewise adopted such regulations. Currently, “any institution that gets federal funding has to set up an IRB to oversee research involving humans”.

However, many social scientists today believe these regulations are not accurate or appropriate for their types of research involving areas where the risks involved “are usually more subtle than life or death”. For example, if you are seeking volunteers to take a survey on test-taking behaviors, the IRB language requirements on physical risks do not fit the needs of the participants in such a study.

This does not, however, imply that all social science research, including big data studies, is entirely risk-free.

Ethical Issues and Risk Analyses When Data Sources Are Comingled

Dr. Elizabeth A. Buchanan who works as an ethicist at the University of Wisconsin-Stout, believes that the Internet is now entering its “third phase” where researchers can, for example, purchase several years’ worth of Twitter data and then integrate it “with other publicly available data”.² This mixture results in issues involving “ethics and privacy”.

Recently, while serving on an IRB, she took part in evaluating a project proposal involving merging mentions of a drug by its street name appearing on social media with public crime data. As a result, people involved in crimes could potentially become identified. The IRB still gave its approval. According to Dr. Buchanan, the social value of this undertaking must be weighed against its risk. As well, the risk should be minimized by removing any possible “identifiers” in any public release of this information.
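As a simple illustration of the kind of de-identification step Dr. Buchanan describes, here is a minimal Python sketch that drops direct identifiers and replaces names with salted one-way hashes. Real de-identification is far harder than this (re-identification from remaining quasi-identifiers is still possible); the example only shows the basic idea:

```python
import hashlib

# Hypothetical example record and salt; real projects would use a
# securely generated, secret salt held only by the research team.
SALT = "research-project-salt"

def deidentify(record: dict) -> dict:
    # Drop direct identifiers, then add a stable pseudonymous subject ID.
    cleaned = {k: v for k, v in record.items() if k not in ("name", "address")}
    cleaned["subject_id"] = hashlib.sha256(
        (SALT + record["name"]).encode()
    ).hexdigest()[:12]
    return cleaned

raw = {"name": "Alice Smith", "address": "12 Main St",
       "drug_mention": "speed", "county": "Dane"}
print(deidentify(raw))
```

Because the hash is deterministic, the same person maps to the same `subject_id` across records, preserving the data’s research value while removing the name itself from any public release.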

As technology continues to advance, such risk evaluation can become more challenging. For instance, in 2013, MIT researchers found out that they were able to match up “publicly available DNA sequences” by using data about the participants that the “original researchers” had uploaded online.³ Consequently, in such cases, Dr. Buchanan believes it is crucial for IRBs “to have either a data scientist, computer scientist or IT security individual” involved.

Likewise, other types of research organizations such as, among others, open science repositories, could perhaps “pick up the slack” and handle more of these ethical questions. According to Michelle Meyer, a bioethicist at Mount Sinai, oversight must be assumed by someone but the best means is not likely to be an IRB because they do not have the necessary “expertise in de-identification and re-identification techniques”.

Different Perspectives on Big Data Research

A technology researcher at the University of Maryland⁴ named Dr. Katie Shilton recently conducted interviews of “20 online data researchers”. She discovered “significant disagreement” among them on matters such as the “ethics of ignoring Terms of Service and obtaining informed consent“. The group also reported that the ethical review boards they dealt with never questioned the ethics of the researchers, while peer reviewers and their professional colleagues had done so.

Beyond universities, tech companies such as Microsoft have begun to establish in-house “ethical review processes”. As well, in December 2015, the Future of Privacy Forum held a gathering called Beyond IRBs to evaluate “processes for ethical review outside of federally funded research”.

In conclusion, companies continually “experiment on us” with data studies. Just to name two, among numerous others, they focus on A/B testing⁵ of news headlines and supermarket checkout lines. As they hire increasing numbers of data scientists from universities’ Ph.D. programs, these schools are sensing an opportunity to close the gap in terms of using “data to contribute to public knowledge”.
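For readers unfamiliar with how such A/B tests work under the hood, here is a toy sketch comparing the click-through rates of two hypothetical news headlines with a two-proportion z-test. The numbers are made up for illustration:

```python
import math

# Toy A/B test: did headline B earn a statistically higher click rate than A?
def two_proportion_z(clicks_a, n_a, clicks_b, n_b):
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    pooled = (clicks_a + clicks_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical data: each headline shown to 5,000 readers.
z, p = two_proportion_z(120, 5000, 165, 5000)
print(round(z, 2), round(p, 4))
```

A small p-value (conventionally below 0.05) would lead the publisher to roll out headline B; the ethical questions raised above concern the fact that the readers in such experiments are rarely told they are subjects at all.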

My Questions

Would the companies, universities and professional organizations who issue and administer ethical guidelines for big data studies be taken more seriously if they had the power to assess and issue public notices for violations? How could this be made binding and what sort of appeals processes might be necessary?

At what point should the legal system become involved? When do these matters begin to involve civil and/or criminal investigations and allegations? How would big data research experts be certified for hearings and trials?

Should teaching ethics become a mandatory part of the curriculum in data science programs at universities? If so, should instructors be selected only from the technology industry, or would it be helpful to invite them from other industries as well?

How should researchers and their employers ideally handle unintended security and privacy breaches as a result of their work? Should they make timely disclosures and treat all inquiries with a high level of transparency?

Should researchers experiment with open source methods online to conduct certain IRB functions for more immediate feedback?

It is likely (or if it isn’t, it should be) a universal truth that everyone loves clean clothes but no one likes doing the laundry. I have arrived at this conclusion through many years of my own thoroughly unscientific observations in the laundry room of my apartment building. (My other research project is focused upon discovering the origin of the rift in the space-time continuum into which stray socks always seem to disappear in the washers and dryers.)

This age-old situation might be about to change based upon an interesting new development. This story is neither made from whole cloth nor a fabric-ation.

A group of scientists in Australia claim to have discovered a means to keep clothes clean by treating them with nano-size particles of two common metals and then exposing the fabric to sunlight. This could perhaps one day mean an end to washing clothes in the traditional soap and water manner. This research was reported in an article in the April 25, 2016 edition of The Wall Street Journal entitled An End to Laundry? The Promise of Self-Cleaning Fabric, by Rachel Pannett. I will summarize and annotate this story, and then pose several of my own questions about this, well, material.

Dry Cleaning

Rajesh Ramanathan, a postdoctoral fellow at RMIT University in Melbourne, Australia, explained the basic principle being tested: minute flecks of copper and silver (called nanostructures) are embedded into cotton fabrics and, when exposed to sunlight, generate small amounts of energy “that degrade organic matter” on the cloth in about six minutes. He and his team are conducting their work at the Ian Potter NanoBioSensing Facility within RMIT.

Dr. Ramanathan characterized the team’s work as being in its early stages and involving “nano-enhanced fabrics” with the “ability to clean themselves”. The silver and copper do not alter the fabric in any way and remain embedded even when rinsed in water. As a result, the fabric’s self-cleaning abilities persist through multiple successive cleanings.

While encouraging no one to get rid of their washing machines just yet, he does believe that his team’s work “lays a strong foundation” for additional advancements in creating “fully self-cleaning textiles”.

To date, the research team has been testing their fabrics with organic dyes and artificial light. Next they are planning experiments with “real world stains” such as ketchup and wine, in an effort to measure how long it takes them to “degrade in natural sunlight”. Additional planned tests will examine how the nanostructures affect odors in the fabrics.

Spin Cycle

However, another scientist, Christopher Sumby, an associate professor in chemistry and physics at the University of Adelaide, expressed reluctance to talk about self-cleaning fabrics “at this stage”.

Nonetheless, this experimental new process uses silver and copper, two “commonly used” chemical catalysts that are “relatively cheap”. Two of the challenges currently facing the research team are how to scale up production of these nanostructures and “how to permanently attach them to textiles”. They are using cotton in their work because it has “a natural three-dimensional structure” that enables the nanostructures to embed themselves and absorb light. They have also found that the process works well in removing organic stains from polyester and nylon.

Dr. Ramanathan said that a variety of industries, including textile manufacturers, have expressed their interest to his team. He believes that in order to commercialize their process, they would need to make sure the nanostructures can “comply with industry standards for clothing and textiles”.

My Questions

What would be the measurable benefits to the environment and energy savings if the need for electric washers and dryers were significantly reduced by self-cleaning fabrics? Should the researchers use this prospect to their advantage in seeking regulatory approval and additional financing?

Although sunlight, which is free and abundant across the entire world, would be the most renewable and environmentally sound source of energy for this, could the process also be extended for use with artificial light (as the team is currently using in its tests) for instances where sufficient sunlight is unavailable due to weather conditions or other environmental factors?

Could this process also be adapted to other porous materials such as wood, paper, and plastics? For example, if people go outside for a picnic, could they theoretically clean up the table, food containers and paper plates just by leaving them in the sun and then reusing them later? This might further cut down on the volumes of these materials being thrown in the trash or sent for recycling.

What other entrepreneurial opportunities might arise if this process becomes commercialized?