News:

Daffodil International University Forum contains Open Text material intended for the learning purposes of the university's students, faculty, and other members, as well as knowledge seekers worldwide. We hope these offerings will aid the distribution of reliable information and data relating to the many areas of knowledge.

Researchers from MIT and Massachusetts General Hospital (MGH) have developed a predictive model that could guide clinicians in deciding when to give potentially life-saving drugs to patients being treated for sepsis in the emergency room.

Sepsis is one of the most frequent causes of admission, and one of the most common causes of death, in the intensive care unit. But the vast majority of these patients first come in through the ER. Treatment usually begins with antibiotics and intravenous fluids, a couple of liters at a time. If patients don’t respond well, they may go into septic shock, where their blood pressure drops dangerously low and organs fail. Then it’s often off to the ICU, where clinicians may reduce or stop the fluids and begin vasopressor medications such as norepinephrine and dopamine, to raise and maintain the patient’s blood pressure.

That’s where things can get tricky. Administering fluids for too long may not be useful and could even cause organ damage, so early vasopressor intervention may be beneficial. In fact, early vasopressor administration has been linked to improved mortality in septic shock. On the other hand, administering vasopressors too early, or when not needed, carries its own negative health consequences, such as heart arrhythmias and cell damage. But there’s no clear-cut answer on when to make this transition; clinicians typically must closely monitor the patient’s blood pressure and other symptoms, and then make a judgment call.

In a paper being presented this week at the American Medical Informatics Association’s Annual Symposium, the MIT and MGH researchers describe a model that “learns” from health data on emergency-care sepsis patients and predicts whether a patient will need vasopressors within the next few hours. For the study, the researchers compiled the first-ever dataset of its kind for ER sepsis patients. In testing, the model could predict a need for a vasopressor more than 80 percent of the time.
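The article does not describe the model's internals, but the general shape of such a predictor can be sketched. The following is a purely hypothetical illustration, not the researchers' actual model: a logistic-style risk score over made-up vital-sign features and made-up weights, mapping routine ER measurements to a probability of needing vasopressors.

```python
import math

# Hypothetical illustration only: the study's actual model and features are
# not described in this article. The feature names and weights below are
# invented for the sketch.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Made-up weights for made-up features:
# mean arterial pressure, serum lactate, heart rate.
WEIGHTS = {"map_mmHg": -0.08, "lactate_mmol_L": 0.9, "heart_rate_bpm": 0.03}
BIAS = 1.5

def vasopressor_risk(vitals):
    """Return a probability-like score that the patient will need vasopressors."""
    z = BIAS + sum(WEIGHTS[k] * v for k, v in vitals.items())
    return sigmoid(z)

stable = {"map_mmHg": 90.0, "lactate_mmol_L": 1.2, "heart_rate_bpm": 80.0}
shocky = {"map_mmHg": 55.0, "lactate_mmol_L": 4.5, "heart_rate_bpm": 120.0}

# Lower blood pressure and higher lactate push the score up.
print(vasopressor_risk(stable) < vasopressor_risk(shocky))  # True
```

In a real system the weights would be learned from the annotated ER records, and the score thresholded to trigger the kind of bedside alert the researchers describe.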

Early prediction could, among other things, prevent an unnecessary ICU stay for a patient who doesn’t need vasopressors, or start early preparation for the ICU for a patient who does, the researchers say.

“It’s important to have good discriminating ability between who needs vasopressors and who doesn’t [in the ER],” says first author Varesh Prasad, a PhD student in the Harvard-MIT Program in Health Sciences and Technology. “We can predict within a couple of hours if a patient needs vasopressors. If, in that time, patients got three liters of IV fluid, that might be excessive. If we knew in advance those liters weren’t going to help anyway, they could have started on vasopressors earlier.”

In a clinical setting, the model could be implemented in a bedside monitor, for example, that tracks patients and sends alerts to clinicians in the often-hectic ER about when to start vasopressors and reduce fluids. “This model would be a vigilance or surveillance system working in the background,” says co-author Thomas Heldt, the W. M. Keck Career Development Professor in the MIT Institute of Medical Engineering and Science. “There are many cases of sepsis that [clinicians] clearly understand, or don’t need any support with. The patients might be so sick at initial presentation that the physicians know exactly what to do. But there’s also a ‘gray zone,’ where these kinds of tools become very important.”

Co-authors on the paper are James C. Lynch, an MIT graduate student; and Trent D. Gillingham, Saurav Nepal, Michael R. Filbin, and Andrew T. Reisner, all of MGH. Heldt is also an assistant professor of electrical and biomedical engineering in MIT’s Department of Electrical Engineering and Computer Science and a principal investigator in the Research Laboratory of Electronics.

Other models have been built to predict which patients are at risk for sepsis, or when to administer vasopressors, in ICUs. But this is the first model trained on the task for the ER, Heldt says. “[The ICU] is a later stage for most sepsis patients. The ER is the first point of patient contact, where you can make important decisions that can make a difference in outcome,” Heldt says.

The primary challenge has been the lack of an ER database. The researchers worked with MGH clinicians over several years to compile medical records of nearly 186,000 patients who were treated in the MGH emergency room from 2014 to 2016. Some patients in the dataset had received vasopressors within the first 48 hours of their hospital visit, while others hadn’t. Two researchers manually reviewed all records of patients with likely septic shock to annotate, among other details, the exact time of vasopressor administration. (The average time from presentation of sepsis symptoms to vasopressor initiation was around six hours.) For more, visit: http://news.mit.edu/2018/machine-learning-sepsis-care-1107

WHAT THE RESEARCH IS: Zero-shot learning (ZSL) is a process by which a machine learns to recognize objects it has never seen before. Researchers at Facebook have developed a new, more accurate ZSL model that uses neural net architectures called generative adversarial networks (GANs) to read and analyze text articles, and then visually identify the objects they describe. This novel approach to ZSL allows machines to classify objects based on category, and then use that information to identify other similar objects, as opposed to learning each object individually, as other models do.

HOW IT WORKS: Researchers trained this model, called generative adversarial zero-shot learning (GAZSL), to identify more than 600 classes of birds across two databases containing more than 60,000 images. It was then given web articles and asked to use the information there to identify birds it had not seen before. The model extracted seven key visual features from the text, created synthetic visualizations of these features, and used those features to identify the correct class of bird.
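The GAN machinery itself is beyond a few lines, but the core zero-shot step described above can be sketched schematically: synthesize a visual-feature "prototype" for an unseen class from its text description, then classify an image by nearest prototype. Everything below is a toy stand-in (the class names, vectors, and the fixed linear "generator" are invented), not the GAZSL model.

```python
import math

# Toy sketch of the zero-shot idea, not GAZSL itself: map a text description
# of an unseen class to a synthetic visual-feature prototype, then classify
# an image by cosine similarity to the prototypes.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hypothetical "text-to-visual" generator: a fixed linear map standing in
# for the trained GAN generator.
def generate_prototype(text_features):
    return [2.0 * t + 0.1 for t in text_features]

# Made-up text feature vectors for unseen bird classes.
text_db = {
    "cardinal": [0.9, 0.1, 0.8],
    "blue_jay": [0.1, 0.9, 0.3],
}
prototypes = {name: generate_prototype(t) for name, t in text_db.items()}

def classify(image_features):
    return max(prototypes, key=lambda n: cosine(image_features, prototypes[n]))

print(classify([1.8, 0.3, 1.7]))  # nearest to the cardinal prototype
```

The trick that makes this "zero-shot" is that no labeled images of cardinals or blue jays are ever needed: the prototypes come entirely from text.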

Researchers then tested the GAZSL model against seven other ZSL algorithms and found it was consistently more accurate across four different benchmarks. Overall, the GAZSL model outperformed other models by between 4 percent and 7 percent, and in some cases by much more.

WHY IT MATTERS: To become more useful, computer vision systems will need to recognize objects they have not specifically been trained on. For example, it is estimated that there are more than 10,000 living bird species, yet most computer vision data sets of birds have only a couple hundred categories. This new ZSL model, which has been open-sourced, has been shown to produce better results and offers a promising path for future research into machine learning. Much of the research into AI remains foundational, but work that improves how systems are able to understand text and correctly identify objects continues to lay the groundwork for better, more reliable AI systems.

In the most recent edition of The Economist, an article titled “New schemes teach the masses to learn AI” appeared. The article profiles the efforts of fast.ai, a Bay Area non-profit that aims to demystify deep learning and equip the masses to use the technology. I was mentioned in the article as an example of the success of this approach — “A graduate from fast.ai’s first year, Sara Hooker, was hired into Google’s highly competitive ai residency program after finishing the course, having never worked on deep learning before.”

I have spent the last few days feeling uneasy about the article. On the one hand, I do not want to distract from the recognition of fast.ai. Rachel and Jeremy are both people that I admire, and their work to provide access to thousands of students across the world is both needed and one of the first programs of its kind. However, not voicing my unease is equally problematic since it endorses a simplified narrative that is misleading for others who seek to enter this field.

It is true that I both attended the first session of fast.ai and that I was subsequently offered a role as an AI Resident at Google Brain. Nevertheless, attributing my success to a part-time evening 12-week course (parts 1 and 2) creates the false impression of a quick Cinderella story for anyone who wants to teach themselves machine learning. Furthermore, this implication minimizes my own effort and journey.

For some time, I have had clarity about what I love to do. I was not exposed to either machine learning or computer science during my undergraduate degree. I grew up in Africa, in Mozambique, Lesotho, Swaziland and South Africa. My family currently lives in Monrovia, Liberia. My first trip to the US was a flight to Minnesota, where I had accepted a scholarship to attend a small liberal arts school called Carleton College. I arrived for international student orientation without ever having seen the campus before. Coming from Africa, I also did not have any reference point for understanding how cold Minnesota’s winters would be. Despite the severe weather, I enjoyed a wonderful four years studying a liberal arts curriculum and majoring in Economics. My dream had been to be an economist for the World Bank. This was in part because the most technical people I was exposed to during my childhood were economists from organizations like the International Monetary Fund and the World Food Program.

I decided to delay applying for a PhD in economics until a few years after graduation, instead accepting an offer to work with PhD economists in the Bay Area on antitrust issues. We applied economic modeling and statistics to real-world cases and datasets to assess whether price fixing had taken place or to determine whether a firm was misusing its power to harm consumers.

First Delta Analytics Presentation to Local Bay Area Non-Profits. Early 2014.

A few months after I moved to San Francisco, some fellow economists (Jonathan Wang, Cecilia Cheng, Asim Manizada, Tom Shannahan, and Eytan Schindelhaim) and I started meeting on weekends to volunteer for nonprofits. We didn’t really know what we were doing, but we thought offering our data skills to non-profits for free might be a useful way of giving back. We emailed a Bay Area non-profit listserv and were amazed by the number of responses. We saw clearly that many non-profits possessed data but were uncertain how to use it to accelerate their impact. That year, we registered as a non-profit called Delta Analytics and were joined by volunteers who worked as engineers, data analysts, and researchers. Delta remains entirely run by volunteers, has no full-time staff, and offers all engagements with non-profits for free. By the time I applied to the Google AI Residency, we had completed projects with over 30 non-profits.

Second cohort of Delta Analytics Volunteers. 2016.

Delta was a turning point in my journey because the data of the partners we worked with was often messy and unstructured. The assumptions required to fit a linear model (such as homoscedasticity, no autocorrelation, and normally distributed errors) were rarely met. I saw first-hand how linear functions, a favorite tool of economists, fell short. I decided that I wanted to know more about more complex forms of modeling.
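To make one of those assumption checks concrete: homoscedasticity means the residuals of a fitted line have constant spread across the range of the predictor. A crude diagnostic (a simplified Goldfeld-Quandt-style check, sketched here on invented toy data) is to fit ordinary least squares and compare residual spread on the low and high halves of x.

```python
import statistics

# Toy illustration of one linear-model assumption check: fit a least-squares
# line, then compare residual spread on the low and high halves of x.
# Data below is fabricated so that noise grows with x (heteroscedastic).

def ols_fit(xs, ys):
    """Return (intercept, slope) of the ordinary least-squares line."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return my - slope * mx, slope

# Fan-shaped data: the alternating-sign noise term scales with x,
# violating the constant-variance assumption.
xs = list(range(1, 21))
ys = [2 * x + ((-1) ** x) * 0.3 * x for x in xs]

b0, b1 = ols_fit(xs, ys)
resid = [y - (b0 + b1 * x) for x, y in zip(xs, ys)]
low, high = resid[:10], resid[10:]
print(statistics.pstdev(high) > statistics.pstdev(low))  # True: spread grows with x
```

When a check like this fails, the OLS coefficient estimates may still be unbiased, but the usual standard errors (and hence the hypothesis tests economists rely on) are no longer trustworthy.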

I joined a startup called Udemy as a data analyst. At the time, Udemy was a 150-person startup that aimed to help anyone learn anything. My boss carved out projects for me that were challenging, had wide impact and pushed me technically. One of the key projects I worked on during my first year was collecting data, developing and deploying Udemy’s first spam detection algorithm.
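The article does not say which technique that spam detector used. A word-count naive Bayes classifier is a common first pass at the problem, and is sketched below purely as an illustration on a fabricated toy corpus.

```python
import math
from collections import Counter

# Hypothetical sketch of a spam detector (not the actual Udemy system):
# naive Bayes over word counts with Laplace smoothing, on toy data.

spam = ["free money click now", "click here free offer", "win money now"]
ham = ["meeting notes attached", "lunch tomorrow", "project status update"]

def train(docs):
    counts = Counter(w for d in docs for w in d.split())
    return counts, sum(counts.values())

spam_counts, spam_total = train(spam)
ham_counts, ham_total = train(ham)
vocab = set(spam_counts) | set(ham_counts)

def log_likelihood(text, counts, total):
    # Laplace smoothing (+1) so unseen words do not zero out the probability.
    return sum(math.log((counts[w] + 1) / (total + len(vocab)))
               for w in text.split())

def is_spam(text):
    return (log_likelihood(text, spam_counts, spam_total)
            > log_likelihood(text, ham_counts, ham_total))

print(is_spam("free money offer"))          # True
print(is_spam("project meeting tomorrow"))  # False
```

A production system would add feature engineering, a much larger labeled corpus, and a calibrated decision threshold, but the scoring structure is the same.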

Working on projects like spam detection convinced me that I wanted to grow technically as an engineer. I wanted to be able to iterate quickly and have end-to-end control over the models I worked on, including deploying them into production. This required becoming proficient at coding. I had started my career working in Stata (a statistical package similar to MATLAB), R, and SQL. Now, I wanted to become fluent in Python. I took part-time night classes at Hackbright and started waking up at 4 am most days to practice coding before work. This is still a regular habit, although now I use the time to read papers not directly related to my field of research and to carve out time for new areas I want to learn about.

After half a year, while I had improved at coding, I was still not proficient enough to interview as an engineer. At the time, the Udemy data science team was separate from my Analytics team. Udemy invested in me. They approved my transfer to engineering where I started as the first non-PhD data scientist. I worked on recommendation algorithms and learned how to deploy models at scale to millions of people. The move to engineering accelerated my technical growth and allowed me to continue to improve as an engineer.

Udemy data team.

In parallel to my growth at Udemy, I was still working on Delta projects. There are two I particularly enjoyed. The first (alongside Steven Troxler, Kago Kagichiri, and Moses Mutuku) was working with Eneza Education, an ed-tech social impact company in Nairobi, Kenya. Eneza used pre-smartphone technology to give more than 4 million primary and secondary students access to practice quizzes by mobile texting. Eneza’s data provided wonderful insights into cell phone usage in Kenya as well as the community’s learning practices. We worked on identifying difficult quizzes that deterred student activity and on tailoring learning pathways to individual need and ability. The second project was with Rainforest Connection (alongside Sean McPherson, Stepan Zapf, Steven Troxler, Cassandra Jacobs, and Christopher Kaushaar), where the goal was to identify illegal deforestation using streamed audio from the rainforest. We worked on infrastructure to convert the audio into spectrograms. Once converted, we structured the problem as image classification and used convolutional neural networks to detect whether chainsaws were present in the audio stream. We also worked on models to better triangulate the sounds detected by the recycled cellphones.
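The audio-to-spectrogram step mentioned above can be sketched in a few lines. This is a minimal stand-in, not the Rainforest Connection pipeline: the signal is split into frames and a plain magnitude DFT is computed per frame (real pipelines use an FFT library and log-mel scaling before feeding the result to a CNN as an image).

```python
import cmath
import math

# Minimal sketch of converting audio to a spectrogram: frame the signal,
# then take the magnitude DFT of each frame. Illustrative only; a real
# pipeline would use an FFT, windowing, and log-mel scaling.

def dft_magnitudes(frame):
    n = len(frame)
    return [abs(sum(frame[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)))
            for k in range(n // 2)]

def spectrogram(signal, frame_size=64, hop=32):
    frames = [signal[i:i + frame_size]
              for i in range(0, len(signal) - frame_size + 1, hop)]
    return [dft_magnitudes(f) for f in frames]  # rows = time, cols = frequency

# Test tone: a sine completing 8 cycles per 64-sample frame,
# so its energy should land in frequency bin 8.
signal = [math.sin(2 * math.pi * 8 * t / 64) for t in range(256)]
spec = spectrogram(signal)
peak_bin = max(range(len(spec[0])), key=lambda k: spec[0][k])
print(peak_bin)  # 8
```

Once the audio is in this time-frequency grid, "is a chainsaw present?" becomes an image-classification problem, which is what makes convolutional networks a natural fit.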

You’ve probably heard that we’re in the midst of an A.I. revolution. We’re told that machine intelligence is progressing at an astounding rate, powered by “deep learning” algorithms that use huge amounts of data to train complicated programs known as “neural networks.”

Today’s A.I. programs can recognize faces and transcribe spoken sentences. We have programs that can spot subtle financial fraud, find relevant web pages in response to ambiguous queries, map the best driving route to almost any destination, beat human grandmasters at chess and Go, and translate between hundreds of languages. What’s more, we’ve been promised that self-driving cars, automated cancer diagnoses, housecleaning robots and even automated scientific discovery are on the verge of becoming mainstream.

The Facebook founder, Mark Zuckerberg, recently declared that over the next five to 10 years, the company will push its A.I. to “get better than human level at all of the primary human senses: vision, hearing, language, general cognition.” Shane Legg, chief scientist of Google’s DeepMind group, predicted that “human-level A.I. will be passed in the mid-2020s.”

As someone who has worked in A.I. for decades, I’ve witnessed the failure of similar predictions of imminent human-level A.I., and I’m certain these latest forecasts will fall short as well. The challenge of creating humanlike intelligence in machines remains greatly underestimated. Today’s A.I. systems sorely lack the essence of human intelligence: understanding the situations we experience, being able to grasp their meaning. The mathematician and philosopher Gian-Carlo Rota famously asked, “I wonder whether or when A.I. will ever crash the barrier of meaning.” To me, this is still the most important question.


The lack of humanlike understanding in machines is underscored by recent cracks that have appeared in the foundations of modern A.I. While today’s programs are much more impressive than the systems we had 20 or 30 years ago, a series of research studies have shown that deep-learning systems can be unreliable in decidedly unhumanlike ways.

I’ll give a few examples.

“The bareheaded man needed a hat” is transcribed by my phone’s speech-recognition program as “The bear headed man needed a hat.” Google Translate renders “I put the pig in the pen” into French as “Je mets le cochon dans le stylo” (translating “pen” as a writing instrument rather than an animal enclosure).