Technology Implementation in Education

Silicon Valley tech giants and Detroit automakers have to convince people to trust self-driving cars before they can sell the futuristic technology to customers. That may prove tricky considering the public’s lingering fears and concerns about self-driving cars. A recent AI-assisted analysis of more than one trillion social posts revealed that scared-face emoticons related to self-driving cars rose from 30 percent of all emoticons used on the topic to 50 percent by 2016. Top concerns mentioned in social media included fears of self-driving cars being hacked and “robot apocalypse” scenarios of technological change.

It would be silly to interpret “scared face” emoticons and emoji posted online as fully representative of public attitudes toward self-driving cars. But they are still a useful sign of the public relations challenge facing companies hoping to sell the self-driving car future to the broader public–especially given that about 70 percent of all Americans use some form of social media. The recent social media findings by Crimson Hexagon, an analytics company based in Boston, also generally line up with previous studies on public attitudes toward self-driving cars. Any company that wants to sell a positive image of self-driving cars will almost inevitably have to confront online narratives of nightmare hacking scenarios and doomsday visions.

Crimson Hexagon’s report looked at one trillion social posts from sites such as Twitter, Facebook, Instagram, Reddit, and online forums, as well as some car-specific sites such as Autotrader and Edmunds. The company performed the analysis using machine learning–a fairly common AI technology nowadays–to sift through patterns in words and emoticons within the social media posts. The machine learning system was trained specifically on natural language processing and could also help identify the emotional sentiment behind certain words or phrases.
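To make the word-and-emoticon pattern matching concrete, here is a toy sentiment scorer that labels posts by counting fear-related versus positive keywords. The keyword lists and example posts are invented for illustration; Crimson Hexagon’s actual models are proprietary and far more sophisticated.

```python
# Toy keyword-based sentiment scorer (illustrative only).
# The term lists below are hypothetical, not Crimson Hexagon's.
FEAR_TERMS = {"hacked", "hacking", "apocalypse", "doomsday", "scared"}
POSITIVE_TERMS = {"revolution", "innovation", "awesome"}

def classify_post(text):
    """Label a post 'fear', 'positive', or 'neutral' by keyword counts."""
    tokens = [t.strip(".,!?") for t in text.lower().split()]
    fear = sum(t in FEAR_TERMS for t in tokens)
    pos = sum(t in POSITIVE_TERMS for t in tokens)
    if fear > pos:
        return "fear"
    if pos > fear:
        return "positive"
    return "neutral"

posts = [
    "self-driving cars could get hacked, total doomsday scenario",
    "what an innovation, a true transportation revolution",
    "saw a self-driving car downtown today",
]
labels = [classify_post(p) for p in posts]
print(labels)  # ['fear', 'positive', 'neutral']
```

Real systems replace the hand-written keyword lists with models trained on labeled examples, but the basic move is the same: map surface patterns in text to sentiment categories.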

Concerns about self-driving cars being vulnerable to hacker attacks seemed fairly prominent, with 18,000 mentions. These fears were often driven by mainstream news reports discussing hacking vulnerabilities and self-driving car safety. But doomsday fears of technological change that revolved around “apocalypse, doomsday and the destruction of humanity” came in close behind with 17,000 mentions.

The online talk was not all doom and gloom surrounding self-driving cars. About 6,000 social posts focused on the positive side of self-driving cars as a “technological revolution that harnesses big data and machine learning.” Another 7,000 social posts discussed self-driving cars as a possible solution to traffic jams and highway congestion even as they also featured angry venting. And 4,000 social posts talked up the innovation behind self-driving cars and expressed awe at the entrepreneurs and engineers developing such technologies.

A new proof-of-concept technology from Carnegie Mellon University turns everyday objects into touch interfaces with an array of electrodes. Walls, guitars, toys and steering wheels come alive with touch sensitivity in the team’s video, and it seems that the possibilities are pretty much endless. What could be next? Grocery store aisles? Whole buildings? Other people? Cell phones?

The design, called Electrick, comes from Carnegie Mellon’s Future Interfaces Group and takes advantage of the same principle as your smartphone screen. Because our skin is conductive, when we touch a surface with electricity running through it we alter the electric field in a predictable way. By coating objects with electrically conductive materials and surrounding them with electrodes, the team can triangulate the position of a finger based on fluctuations in the field. Combined with a microprocessor, they can train their program to translate swipes and taps into commands.
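The triangulation step can be pictured with a toy model: electrodes at known positions each register a disturbance that weakens with distance to the finger, and a signal-weighted centroid of the electrode positions gives a rough touch location. This is a simplified sketch under assumed falloff behavior, not the Electrick team’s actual electric field tomography solver.

```python
# Toy sketch of locating a touch from per-electrode signal strengths.
# Assumptions (not from the paper): signal falls off as 1/(1 + distance),
# and a weighted centroid of electrode positions approximates the touch point.

def simulate_signals(electrodes, touch):
    """Fake per-electrode readings: stronger when the finger is closer."""
    return [1.0 / (1.0 + ((ex - touch[0])**2 + (ey - touch[1])**2) ** 0.5)
            for ex, ey in electrodes]

def locate_touch(electrodes, signals):
    """Estimate touch position as the signal-weighted centroid of electrodes."""
    total = sum(signals)
    x = sum(w * ex for w, (ex, _) in zip(signals, electrodes)) / total
    y = sum(w * ey for w, (_, ey) in zip(signals, electrodes)) / total
    return (x, y)

# Eight electrodes around the edge of a 10 x 10 surface.
electrodes = [(0, 0), (5, 0), (10, 0), (10, 5),
              (10, 10), (5, 10), (0, 10), (0, 5)]
signals = simulate_signals(electrodes, touch=(7, 3))
x, y = locate_touch(electrodes, signals)
print(round(x, 1), round(y, 1))  # estimate lands in the same quadrant as the touch
```

The centroid estimate is biased toward the center of the surface, which is one reason the real system trains a model on labeled touches instead of relying on simple geometry.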

They experimented with a few different application methods. Vacuum forming works for simple shapes, while a spray-on version coats even irregular objects, such as a guitar or a miniature Yoda head. Materials can also be custom molded or 3-D printed, and it appears that Electrick even works with Play-Doh and Jell-O.

Some of the more practical applications include prototyping controller designs and modifying laptops and surfaces to run programs with a single touch, but the sky is really the limit here. Turn on your lights with the refrigerator. Play Halo with your coffee table. Change the channel with your cat (maybe not). You can imagine a future where any surface is a potential control device — and the attendant embarrassment when sitting down in the wrong place causes the blender to erupt.

Their system is low-cost and widely applicable, they say, and the only downside at the moment is that the presence of an electromagnetic field from other powered objects nearby can interfere with the accuracy of the system. They are currently working on ways to get around that.

A four-wheeled drone’s first aerial package delivery test showed off a special touch by also driving up to the doorstep of its pretend customer. That capability to deliver by both air and land makes the Panther drone an unusual competitor in the crowded drone delivery space. But the drone’s limited delivery range may pose a challenge in competing against the delivery drones of Google and Amazon.

Unlike most delivery drones designed purely for flight, the Panther drone resembles a four-wheeled robot with six rotors extending out from its sides. That design leverages the earlier “flying car” development efforts of Advanced Tactics Inc., a company based in Southern California. Previously, Advanced Tactics spent time developing its “Black Knight Transformer” flying car with U.S. military missions in mind. The Panther drone appears to be a miniaturized version of the larger Transformer with commercial drone delivery as one of the several new possible roles.

“The Panther can fly at over 70 mph and has a flight time with a five-pound package of well over six minutes,” says Don Shaw, CEO of Advanced Tactics Inc. “With a two-pound package and larger battery, it can fly well over nine minutes.”

Panther Drone Tradeoffs

The good news for the Panther’s delivery drone aspirations is that its versatility could make it easier to deliver packages. Delivery drones will eventually face the challenge of navigating neighborhoods with obstacles such as trees and power lines. In addition, drone developers must figure out how the drones will safely deliver packages into the hands of customers without risking any drone-on-human accidents.

Some delivery drone efforts such as Google’s Project Wing have attempted workaround solutions such as lowering burrito deliveries to the ground with a cable. By comparison, the Panther could simply land in any open area—such as on a local road—and then drive to the doorstep of customers. It could even drive inside the doorways of businesses or access warehouses through their loading bays.

But air and ground versatility may have come at the cost of delivery range. That is because the ground mobility drivetrain adds extra weight that the Panther drone must expend battery power on lifting whenever it flies through the air. A future version of the Panther drone with a robotic arm to handle packages could potentially be heavier and shorten the delivery range even more. (On the other hand, Shaw pointed out that the drone can drive for hours on the ground at up to five miles per hour.)

The old science fiction fantasy of a flying car that both drives on the ground and flies in the air is unlikely to revolutionize daily commutes. Instead, Silicon Valley tech entrepreneurs and aerospace companies dream of electric-powered aircraft that can take off vertically like helicopters but have the flight efficiency of airplanes. The German startup Lilium took a very public step forward in that direction by demonstrating the first electric-powered jet capable of vertical takeoff and landing last week.

The Lilium Jet prototype that made its debut resembles a flattened pod with stubby thrusters in front and a longer wing with engines in the back. The final design concept shows two wings holding a combined 36 electric turbofan engines that can tilt to provide both vertical lifting thrust and horizontal thrust for forward flight. Such electric engines powered by lithium-ion batteries could enable a quieter breed of aircraft that could someday cut travel times for ride-hailing commuters from hours to minutes in cities such as San Francisco or New York. On its website, Lilium promises an air taxi that could eventually carry up to five people at speeds of 190 miles per hour: about the same speed as a Formula One racing car. And it’s promising that passengers could begin booking the Lilium Jet as part of an air taxi service by 2025.

“From a technology point of view, there is not a challenge that cannot be solved,” says Patrick Nathen, a cofounder and head of calculation and design for Lilium. “The biggest challenge right now is to build the company as fast as possible in order to catch that timeline.”

Nathen and his cofounders met just three and a half years ago. But within that short time, they put together a small team and began proving out their dream of an electric jet capable of vertical takeoff and landing (VTOL). Lilium began with seed funding from a tech incubator under the European Space Agency but has since attracted financial backing from private investors and venture capital firms.

Getting Lilium off the ground probably would not have been possible just five years ago, Nathen says. But the team took full advantage of the recent technological changes that have lowered the price on both materials—such as electric circuits and motors—and manufacturing processes such as 3D printing. Lower costs enabled Lilium to quickly and cheaply begin assembling prototypes to prove that their computer simulations could really deliver on the idea of an electric VTOL jet.

The thrill of a crime story is the unfolding of “whodunnit,” often against a backdrop of very little evidence. Positively identifying a suspect, even with a photo of her face, is challenging enough. But what if the only evidence available is a grainy image of a suspect’s hand?

Thanks to a group at the University of Dundee in the UK, that’s enough information to positively ID the perp.

The Centre for Anatomy and Human Identification (CAHID) can assess vein patterns, scars, nail beds, skin pigmentation and knuckle creases from images of hands to show, with high reliability, that police got the right person in several very serious court cases in the UK. CAHID specializes in human identification and was also the group that famously reconstructed King Richard III’s face after his body was found in a car park in Leicester in 2012.

In the Dark

The technique was born in 2006 when local police came to the team with a Skype video recorded in the dark, which had been languishing on their desks for some time. The dark recording conditions meant that images were taken in infrared light, and only a hand and forearm were in view. That was enough for the team to match the superficial vein patterns of the offender and suspect with high reliability.

“The infrared light interacts with the deoxygenated blood in the veins so you can see them as black lines,” says Professor Dame Sue Black, who led the research. “You are actually seeing the absorption of the infrared light into the deoxygenated blood.” Black is an expert in forensic anthropology who has been crucial in high-profile criminal cases in the UK and headed the British Forensic Team’s exhumation of mass graves in Kosovo in 1999.

Building a Research Basis

Since that first case in 2006, CAHID has used the method in roughly 30 to 40 cases per year, and the team has also applied this procedure to intelligence and counterterrorism work. They have been hard at work trying to establish an academic explanation for their method over the past decade.

“It is important that we are able to say with some degree of reliability that we can exclude a suspect, or say there is a strong likelihood this is the same individual,” says Black.

In order to develop their technique further, CAHID created a database of 500 police officers’ arms and hands, taken in both visible and infrared light.

Heady times are on the horizon for brain research with efforts underway across the globe. As a leading partner in the U.S. BRAIN Initiative, launched in 2013, the National Science Foundation (NSF) is advancing fundamental research of the brain’s structure, activity, and function. NSF also plays an integral role in efforts to coordinate large brain projects in various countries with an aim of launching a Global Brain Initiative.

To mark Brain Awareness Week (March 13-19), the following images showcase some of the NSF-funded tools and insights that are deepening the understanding of the three-pound parallel processor that sits atop our shoulders.

Simple brains offer insights into the more complex human brain. At left, pink and green highlight the fruit fly’s center of smell.

Not Such a Harebrained Idea

In California tide pools, slithery sea hares like this one create ink-laden smoke screens for protection. In the lab, they create opportunities for discovery. As a model system, the humble sea hare’s brain is relatively simple, composed of about 20,000 neurons that grow throughout its lifetime.

Researchers are using the sea hare model to learn how individual cells function, discover the chemical pathways controlling various brain activities and study how memories are processed and stored.

Understanding how to control specific chemicals could advance new ways to diagnose and treat chronic pain, drug addiction, and neurological diseases.

Quiet Body, Active Mind

New optical imaging tools are providing unprecedented views of brain processes. One such technique produced these rainbow brain lobes of a mouse, another popular system researchers use to study the brain. The colors reflect the vivid synchronized patterns of neural activity in a mouse at rest.

This research marks the first time brain activity and blood flow were simultaneously imaged. The work provides a completely new view of brain activity and could lead to a better understanding of how various brain regions interact. The work also lays a foundation for pursuing new treatments for various neurological diseases.

A Bundle of Nerves

About a third of a millimeter in diameter, this mini-brain offers a 3-D alternative to cells growing in a petri dish. It’s cheap, costing about 25 cents to make, and relatively easy to grow. The brain begins forming a day after its seeds are planted and develops complex 3-D nerve networks within two to three weeks. A small sample of living tissue from a single rodent can make thousands of mini-brains.

The mini-brain lasts about a month, and it could be used to study a range of challenges in neuroscience, including transplanting nerve cells to help treat Parkinson’s disease and studying how adult nerve stem cells develop.

More than a year after detecting the first confirmed gravitational waves, researchers were busy at the Laser Interferometer Gravitational-wave Observatory (LIGO) in Livingston, La., upgrading the massive instrument. Building on lessons learned during that historic first run, they expect the improved detector will find more gravitational waves during the second observing run, which began Nov. 30.

LIGO detects gravitational waves by splitting a powerful infrared laser beam in two, then sending the beams at right angles through tunnels to mirrors 2.5 miles away. The beams are recombined upon return. A gravitational wave will warp space and briefly change the relative distance between the mirrors and the photodetector situated near the LIGO control room. The difference is astonishingly small, just 1/10,000 of a proton’s diameter, but it can be detected if the mirrors are isolated from all external sources of vibration.
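The sensitivity figure quoted above can be put in perspective with a quick back-of-the-envelope calculation. Assuming a proton diameter of roughly 1.7 × 10⁻¹⁵ meters (an approximation not stated in the article):

```python
# Back-of-the-envelope version of LIGO's quoted sensitivity.
# Assumed value: proton diameter ~1.7e-15 m (approximate).
MILE_M = 1609.34
proton_diameter_m = 1.7e-15
arm_length_m = 2.5 * MILE_M           # the 2.5-mile arm, ~4,023 m

delta_L = proton_diameter_m / 10_000  # 1/10,000 of a proton's diameter
strain = delta_L / arm_length_m       # dimensionless strain h = dL / L

print(f"mirror displacement: {delta_L:.1e} m")
print(f"strain sensitivity:  {strain:.1e}")
```

The resulting strain, on the order of 10⁻²³ under these assumptions, is why even a distant logging crew can swamp the measurement and why the mirrors need such elaborate vibration isolation.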

Discover photo editor Ernie Mastroianni visited the facility in November as physicists and engineers were calibrating equipment.

Super Vacuum

Massive stainless steel tubes, vacuum equipment, and seismic isolation gear are prepared for installation at the corner station of the Laser Interferometer Gravitational-wave Observatory (LIGO) detector in Hanford, Washington. The facility is a near twin of the Livingston detector.

To ensure the beam’s integrity, the laser travels through sealed stainless steel tubes, 1.2 meters wide, that hold a vacuum at just one trillionth of Earth’s atmospheric pressure, eight times thinner than open space. This vacuum, says LIGO spokesman William Katzman, has been maintained since 1999. It is the largest sustained ultra-high vacuum in the world and is necessary to prevent any air currents from deflecting the laser’s path.

Mirror, Mirror

Inside a stainless steel chamber, LIGO technicians examine the surface of one of the test mass mirrors that will reflect infrared laser light to measure the effect of gravitational waves. After installation, all air was vacuumed from this chamber.

In the Control Room

Astrophysicist Stuart Aston monitors external vibrations on the LIGO test mass mirrors during an engineering run in November 2016. Aston's job is to keep optical components isolated from external vibration, and he was not happy as he scanned the data from LIGO’s control room.

“The multi-stage suspensions provide incredible levels of isolation from ground motion,” said Aston, but it wasn’t happening on this day. Less than a half-mile away, a logging crew was plowing a path through the forest, creating massive ground vibrations and swamping the gear that normally nullifies the noise.

Aston was soon driving the 2.5-mile distance to the detector’s far end to fix the glitch.

Cheers to the National Science Foundation (NSF) as it celebrates 67 years of service to the nation this month. Since its launch in 1950, NSF has provided funding for basic research that has laid the foundation for many of the technologies we rely on today such as mobile communications, the Internet, and GPS. The following images illustrate just a few of the areas touched by NSF research.

The first six images represent past or current technologies helped by NSF funding, and the second six preview opportunities where NSF support will make a difference in the decades to come. Pictured here: magnetic resonance imaging (MRI), a now-common medical imaging technique that advanced because of NSF funding over several decades. New MRI systems using magnets made of materials like these golden superconducting strands are increasing the power and precision of this important clinical tool.

Connected from Coast to Coast

Texting, snapping and tweeting are all possible because of the internet. From humble beginnings as NSFNet in the academic research community to its current ubiquitous presence, the internet’s infrastructure grew in a relatively short period of time as private-sector providers scrambled to meet rising public demand for access and bandwidth. This growth will continue into the foreseeable future as the network evolves and more devices are brought online.

In this image, the nationwide rainbow represents the connections between routers in major urban areas.

Slipping the Surly Bonds

This glowing corridor represents some of the latest hardware for testing cloud computing, the practice of using a network of remote servers, rather than a local server or personal computer, to store, manage and process data. Flanking the aisle are the 200 or so servers that served as a testbed for Apt, the precursor to NSF’s experimental cloud computing effort CloudLab. CloudLab and its twin testbed Chameleon are part of the NSFCloud program that is now creating opportunities to experiment with novel cloud architectures, applications, and security measures.

The experiments performed through CloudLab and Chameleon will lead to new capabilities for future clouds and a deeper understanding of cloud computing fundamentals.

Protecting Life and Property

When Mother Nature wields her fury through natural disasters such as tornadoes, hurricanes and earthquakes, weather forecasters and emergency personnel alert local communities based on input they’ve received from event modeling and simulations. With the help of NSF funding, these technologies can now provide highly localized, real-time data. In the case of a tornado, simulations like the one pictured here provide forecasters with valuable information such as wind speed, air flow, and pressure. The orange and blue wisps represent the rising and falling airflow around the tornado.

The image is modest, belying the historic import of the moment. A woman on a white sand beach gazes at a distant island as waves lap at her feet — the scene is titled simply “Jennifer in Paradise.”

This picture, snapped by an Industrial Light and Magic employee named John Knoll while on vacation in 1987, would become the first image to be scanned and digitally altered. When Photoshop was introduced by Adobe Systems three years later, the visual world would never be the same. Today, prepackaged tools allow nearly anyone to make a sunset pop, trim five pounds or just put celebrity faces on animals.

Though audiences have become more attuned to the little things that give away a digitally manipulated image — suspiciously curved lines, missing shadows and odd halos — we’re approaching a day when editing technology may become too sophisticated for human eyes to detect. What’s more, it’s not just images either — audio and video editing software, some of it backed by artificial intelligence, is getting good enough to surreptitiously rewrite the media we rely on for accurate information.

The most crucial aspect of all of this is that it’s getting easier. Sure, Photoshop pros have been able to create convincing fakes for years, and special effects studios can bring lightsabers and transformers to life, but computer algorithms are beginning to shoulder more and more of the load, drastically reducing the skills necessary to pull such deceptions off.

In a world where smartphone videos act as a bulwark against police violence and relay stark footage of chemical weapons strikes, the implications of simple, believable image and video manipulation technologies have become more serious. It's not just pictures anymore — technology is beginning to allow us to edit the world.

It Begins With Pictures

A slew of projects, many in partnership with Adobe, are bringing intricate still image editing into the hands of amateurs. It’s easy to learn how to cut and paste in Photoshop or add simple elements, but these programs take it a step further.

One project from Brown University lets users change the weather in their photos, adding in rain, sunshine or changing seasons, with a machine learning algorithm. Trained on thousands of data points, the program breaks images into minute parts and edits each accordingly to make adjustments in lighting and texture that correspond to changing conditions.

Another project, this time from the University of California, Berkeley, allows users to manipulate images wholesale, either with a set of simple tools and sliders or simply by drawing basic figures and letting the algorithm fill in the rest. The demo video shows one type of shoe morphing into another and mountains appearing from a simple line drawing. The program requires little more than basic computer skills.

Fundamentally, technology is created to ease human work, and it has become a basic part of daily life. It is used in nearly every aspect of human life, including medicine, communications, the military, transportation, and education. Of these fields, education still makes comparatively limited use of technology; in reality, technology is more widely used for entertainment, a pattern that can lead to problems such as misuse and harm to health. Implemented in education, however, technology can assist and accelerate learning. Here are some advantages of applying technology in education.

Technology can help teachers teach

Technology can be a tool for teachers to convey their teaching materials to students. Used in the learning process, it lets teachers deliver course material easily and effectively, and teachers who teach with technology usually find it easier to achieve their learning goals.

Technology will spark teachers’ creativity

Technology can also foster a sense of creativity in teachers, encouraging them to devise more creative teaching methods. That encouragement matters because technology is just a tool that requires a person to operate it; without teachers, it cannot benefit education optimally.

Technology helps students learn

Using technology in teaching and learning makes students more interested in following lessons. When students are interested in what the teacher is teaching, there is no need to push them to take learning seriously; they become active on their own and do not grow bored with learning.

Technology can create exciting learning activities

With technology, teachers can create an exciting learning atmosphere. That helps trigger students’ understanding, so they quickly grasp what the teacher conveys. In addition, students will not feel bored, because they enjoy learning through methods that are interesting.

Technology helps students find learning resources

Information technology, especially the internet, provides a wide range of learning resources that students can access anytime and anywhere. They can get many of the references they need for free, and the more they learn from different sources, the more knowledgeable they become.

Technology can raise school standards

Schools that use technology in the teaching process will improve their quality and become favorite destinations for prospective students.