Archive

Posts Tagged ‘artificial intelligence’

In the highly competitive world that we live in today, it's clear that a fierce market is building amongst the great tech companies such as Microsoft, Google, Amazon, Baidu, and a host of startups. All these companies have developed, deployed, and integrated their cloud-based solutions with Artificial Intelligence. Artificial intelligence provides the steroids that the Cloud industry needs in order to accelerate its existence, growth, and empowerment. AI depends on key components to support its constant learning processes, and these processes can be better understood through “machine learning”. Machine learning allows information to be rapidly digested, conceptualized, and intelligently categorized. Cloud products such as Azure from Microsoft, AWS from Amazon, and Google Cloud with its TensorFlow framework integrate machine learning into their systems to enhance the user experience and automate mundane tasks with little or no human effort involved.

With this evolution and the stiff competition, collaboration would seem to be nowhere in sight, especially between rival companies which have no need to partner up, but simply to dominate their place in the industry. Surprisingly, Microsoft and Amazon have done just such a collaboration. This collaborative effort not only shows that larger companies can work together; the data collected from the program also allows both companies to be more successful in their domination of the cloud space. Microsoft and Amazon have come together to form Gluon. Gluon in the dictionary is defined as a “subatomic particle of a class that is thought to bind quarks together”. However, that is not the Gluon we are referring to. Gluon is the first distinctive collaborative effort of these technology giants in the race for Artificial Intelligence and information gathering. Gluon is considered the storehouse for machine learning and the ability to “voluntarily” utilize machine learning in the development of products and services by Amazon and Microsoft. This storehouse of technological learning, a knowledge base bubbling over with information, is the type of data warehouse that AI thrives on.

This cohesive collaboration between developers and machine learning has now blossomed into what is dubbed “deep learning”. Deep learning is essentially a combination of three distinct components: data for training, a neural network model, and an algorithm which trains the neural network. The neural network translates the data and feeds AI this information, allowing AI to grow on a more diversified scale. The training algorithm self-adjusts the network’s weights based on errors in the network’s output. This is a memory- and compute-intensive process through which machine learning and AI adapt their predictive outputs. Frameworks such as Caffe2, TensorFlow, Apache MXNet, and Cognitive Toolkit offer options to speed up neural network training, which often takes days, by reducing the learning time and accelerating the parallelization of distributed computation.
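The error-driven adjustment described above can be sketched with a toy example: a single linear neuron trained by gradient descent, where each weight update is proportional to the output error. The data and learning rate below are made up for illustration; real frameworks apply the same principle across millions of weights.

```python
# Toy illustration of error-driven learning: a single linear neuron
# adjusts its weight and bias to reduce its output error, the same
# principle that deep learning frameworks apply at far larger scale.

def train(samples, epochs=200, lr=0.05):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, target in samples:
            pred = w * x + b        # forward pass
            error = pred - target   # error in the network's output
            w -= lr * error * x     # self-adjust the weight...
            b -= lr * error         # ...in proportion to the error
    return w, b

# Learn y = 2x + 1 from a handful of points; after training,
# w and b should land near 2 and 1.
data = [(0, 1), (1, 3), (2, 5), (3, 7)]
w, b = train(data)
```

Each pass nudges the parameters against the error gradient, which is why the post describes the process as the algorithm "self-adjusting its output based on errors in the network output".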

Even though the products built within Gluon are effective and poised to change the way AI is developing by giving it more knowledge, the key is having developers utilize these products to diversify and intensify the data stream. AWS has been experimenting with its developers by using MXNet to train neural networks. Microsoft has become a heavy contributor to the open-source MXNet project, opening it to an ever-increasing pool of developers. The collaboration and data being created and distributed by these two powerhouses through Gluon can be overwhelming for a beginner first interacting with the program, but even for more advanced developers, the data-intensive algorithms seem to take on a life of their own, demanding more ways to conform and adjust with massive error reduction. The four key innovations introduced by Gluon are as follows:

Friendly API – using clear and concise code, it allows developers to quickly learn, build, and understand their models and data.

Dynamic networks – allow for ease of access and rapid fluctuation of the data structure. Fluency in the data structure is critical for development, as it allows hybrid scenarios and reduces stagnancy of the data flow compared with previous machine-learning software.

Algorithms that define the network – the seamless combination of the model and the algorithm allows the network to adjust its definition during training. This is critical, as it allows developers to use programming loops and conditionals. Algorithms are now easier to change, create, and debug.

High-performance operators for training – give the ability to create dynamic graphs and a concise API without sacrificing speed. Previous approaches consumed valuable run time that this feature works through effortlessly, drastically picking up speed.
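To make the "dynamic networks" point concrete, here is a framework-free Python sketch of define-by-run computation: because the forward pass is ordinary code, loops and conditionals can change the depth of the network per input, which is exactly what static-graph toolkits made awkward. The "layers" and numbers here are invented for illustration and are not Gluon's actual API.

```python
# Define-by-run sketch: the computation graph is just ordinary control
# flow, so its depth is decided at run time, input by input.

def forward(x, weights):
    # Repeat a "layer" until the activation is large enough; the
    # number of steps (the graph's depth) varies per input.
    steps = 0
    while x < 1.0 and steps < 10:
        x = x * weights[steps % len(weights)] + 0.1   # one "layer"
        steps += 1
    return x, steps

out, depth = forward(0.05, weights=[1.5, 2.0])
# For this input the loop runs 4 times before the value exceeds 1.0.
```

In a static-graph system, that `while` loop would have to be baked into the graph ahead of time; in a dynamic system like Gluon's, it is simply Python.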

The question now becomes: how do developers access Gluon? Well, it’s provided through Apache MXNet, and future releases will add support for Microsoft Cognitive Toolkit. The AWS team has already published a front-end interface with a low-level API to include other specifications and frameworks. Once you access Gluon, you can utilize an AWS Deep Learning AMI to find a plethora of examples and notebooks utilized and documented by fellow developers.

Could this be a taste of what is to come from these super-giant tech companies in terms of collaboration in order to leap ahead, or is this just an opportunity for the public to advance an already thriving technology called Artificial Intelligence? These tech giants are clearly leveraging all aspects of “free data” and utilizing the voluntary efforts of developers worldwide to feed this neural network. As a Microsoft partner, we anticipate that Microsoft, being a leader in products on the market, will utilize Gluon to create more intensive, responsive, and advanced products built with AI as the backbone to streamline many processes and essentially speed up productivity. IT GURUS OF ATLANTA is your Microsoft partner of choice, bringing you the cutting edge in design, technology, and its advancements.

As the world gears up and braces for the inevitable impacts of Artificial Intelligence, robots, and full automation, many are left with questions. Some people are worried about jobs: as they diminish and automation increases, there are concerns over where the job market is headed. It’s amazing to grasp that just a few years ago, the computer, the internet, and the whole concept of technology took a foothold in society. Already, because of this uprising, many jobs have literally disappeared and other once-unimagined jobs now exist. The truth is that life and society change every day. The key that all of us need to understand is that in order not to get left behind, we must learn, adapt, and change. We are either the ones changing the world, or the world will change us. Many fear that the combination of machine learning and artificial intelligence will spawn a new race of robots with their own version of right and wrong. However, one discovery which has managed to fly under the radar is very significant, even more so than AI: the creation of “Artificial Embryos”.

In case you took a breather and rechecked what was said in the last paragraph, you read correctly. An embryo is what cells combine to form and what houses the beginnings of life. All of us mammals start out as embryos after fertilization takes place. The embryo develops the cells which in turn carry their own DNA sequence, multiply and divide, then manifest into different parts of the body which determine sex, hair color, arm length, bone structure, and all the characteristics of an offspring. All of this comes from an embryo. There are constant studies and new developments as scientists research this mysterious yet crucial part of life. Yue Shao, who has a background in Biomedical Engineering and Mechanical Engineering, made an astounding discovery during this research and experimentation that has profound possibilities in the world of biology.

Yue Shao had been experimenting with stem cells and scaffolds of soft gel to determine replication of neural tissue. By pure accident, Dr. Shao stumbled upon more than was bargained for. The cells in the gel began to change and rearrange at an astounding rate, forming a lopsided circle. The structure Dr. Shao created was very much aligned with the same type of embryo that attaches to the uterine wall and starts to form the amniotic sac; it appeared nearly identical to a human embryo. The possibility of this developing into a human required Dr. Shao to reach out to other groups to determine what to do with the embryo.

After more research and development, the team determined that the embryo would not fully develop into a human being because it did not have the cells required to form major organs such as a brain, heart, or placenta. However, even though the “embryoids” lacked the required cells, because of their indistinguishable likeness to human embryos, the team has been destroying them in formaldehyde and detergent. The ethical repercussions of an embryo going full term, and what would come of that development, are too much of a risk to entertain currently. This does not close the Pandora’s box that has already been opened, pushing other researchers to want to create a full human embryo. One particular group in Cambridge, U.K., has since developed a very convincing six-day-old mouse embryo and is now pushing the limits to create a human one.

Many may ask and wonder if the goal is to replicate human beings or to develop another race. For now, researchers are limiting the work to combining different stem cells to create bits of lung, intestine, and brain tissue. With Dr. Shao’s discovery, the race is on and the torch has been lit to be the first group to develop an actual human embryo. A research group at Rockefeller University in New York has joined the race, dubbing the new technology “synthetic embryology”.

This is the breakthrough biologists have been searching for, as previous attempts at creating an embryo resulted in embryos only growing for up to a week before diminishing. The new discovery was growing at a rate that so superseded previous attempts that Dr. Shao’s team had to terminate the process because they had no concept of what the embryo would have developed into. Up until now, much of what takes place during the development of a human from the embryo stage has remained a mystery. Human cloning and creating replicas of a person are all on the table with this epic discovery, and the possibility of unlocking the process beyond the failed attempts is now within reach thanks to Dr. Shao.

To take this to another level, the possibility of creating real skin, human organs, and human traits would take an android to another level of advancement towards being more like a human being. With the advancements of Artificial Intelligence, robotics, and now stem cell research, the possibilities are endless. Where we go with these advancements as we open doors remains to be seen, along with the direction in which we allow the technology to lead us. Infusing technology and biology is a conception brought to light many years ago with the introduction of the computer. The influx of readily available data and the communication expansions that have connected us all allow these developments to surge and expand. The more questions we ask, the more technology responds with answers and advancements. IT GURUS OF ATLANTA prides itself on being a leader in the technology space and will keep providing updates in this arena.

If you are into technology, science, information, or any category which combines them, then you are in the know when it comes to comic book movies, which have encompassed the entire world with their adventure and surreal action graphics, captivating audiences worldwide. One of the most noted and memorable characters is Tony Stark. In the comic books and in the Iron Man movies, Stark is known as a visionary billionaire who is ahead of his time in terms of technological advancement and his aptitude for success. Now, if you are reading this and wondering who Tony Stark could compare to, it won’t be long before the name Elon Musk surfaces. Elon Musk has been shattering the limits when it comes to innovation, and it is hard to mention innovative future technology without mentioning his name. From the Tesla car line, now one of the world’s most prestigious and efficient electric car brands on the market, to the Hyperloop, which aims to change the way we travel at near the speed of sound while cutting down on fossil fuels, to SpaceX, which is working to take civilians to outer space, Elon Musk is not only wealthy but ahead of his time when it comes to technological advancements.

If you are still reading, then be prepared to get even more astonished as we depict the latest innovation, and the greatest similarity between Elon Musk and Tony Stark: Elon Musk has managed to raise $27 million towards his latest venture, Neuralink. Yes, Neuralink is Elon Musk’s latest effort to blend humans with machines in much the way Tony Stark did with his Iron Man suit, except this is not fiction, but reality. Elon Musk seeks to counter the fast-growing effects of Artificial Intelligence. It is his belief that humanity will need a counterbalance in the event that AI becomes self-aware, which is literally only moments or a few keystrokes away. With technology that learns, it is hard to stop it from learning or forming its own version of what is right or wrong. The limitations that we place on technology, specifically on AI, are only shackles which can be broken at any given point in time. The problem that Elon Musk has brought to light is that should humanity face a self-aware Artificial Intelligence, there is no way of telling whether it will be for mankind or against mankind.

Elon Musk’s counterbalance measure is to meld machines with humans, giving humans an upgrade rather than upgrading learning machines. Prior to Elon Musk, Neuralink was a medical research company founded in 2016 which was poorly funded until Elon Musk joined its efforts. The idea behind Neuralink is a high-bandwidth digital interface which interlaces the brain and allows it to transmit data at the speed of thought. Fittingly, the technology for Neuralink is dubbed a “neural lace”. The concept behind the neural lace is unlocking the communicative capacity of the brain through thought rather than through antiquated means such as speech or typed text. The reality is that the human brain forms thoughts more rapidly and frequently than the body can translate them into communicable, understandable output. According to Musk, the brain compresses our complex thoughts into minimal voice and text output in order to establish communication between fellow humans. Musk further elaborated by stating, “If you have two brain interfaces, you could actually do an uncompressed direct conceptual communication with another person”.

The neural lace would essentially establish a direct link to another person’s thoughts, or an interconnected network of people’s thoughts, that would limitlessly speed up communication, imagination, innovation, and most importantly intelligence beyond any current concept. The human brain is an untapped resource of intelligence that surpasses even the most advanced computational technology in existence. At present, even the most innovative minds still have not tapped the full potential of the intellect housed in the human brain. Being able to tap into that pool of intelligence, the greatest minds of today coupled with the undiscovered greatest minds of tomorrow, gives humanity an incomprehensible advantage to advance. Early innovations of this kind have already been noted among people with disabilities. One example already in place is the cochlear implant, which captures audio and translates it into electrical impulses that the brain registers and interprets. EEG readers have also let stroke victims control robotic arms through their thoughts. The technology has already been tested and is in use; however, the neural lace is geared to take it to a whole other level.

An example of this can be viewed in “The Matrix”, which captured the thoughts of human beings and put everyone on the same plane at the level of thought. It allowed for the sharing of thoughts and technology on a wider, more advanced plane, and for that reason there was a constant battle between their artificial intelligence and human beings. Even though that aspect of the future is fantasy, the realization of artificial intelligence and its limitless possibilities is what has geared this full-scale preventive measure by Elon Musk. With any technology or advancement, the idea is to have protocols and defensive mechanisms in place that check the technology and still allow human beings to maintain control over society and their own well-being. The power of choice is what is being fought for, along with the maintenance of the freedoms the human race currently enjoys in our everyday world.

The concept that “the proof is in the pudding” is what the team at Neuralink is gearing up to demonstrate. Their initial phase is to restore feeling in spinal cord injury victims and brain functionality in people whose lives have been altered by disability. This path brings to mind Kernel, founded by Bryan Johnson, who earlier founded the payments company Braintree. He has invested $100 million of his personal money into using “brain chips” to help cure people with Alzheimer’s and epilepsy, and his company is also on the path towards augmented cognition through the advancement of human beings. The major obstacle that Neuralink will have to conquer is the ability to directly interface with the brain in order to accomplish the type of link or testing necessary to get the results required. This type of surgery is categorized as “high-risk”; currently, only extreme circumstances medically qualified as “severe cases” would allow it to take place. Due to this inhibitor, the possibilities of the technology are limited. Human rights activists, animal rights activists, and the risk of placing human life in general in danger are what inhibit the acceleration of this unique technology.

Technology is advancing at a rapid pace, and both the direction and the limits of its expansion have become unhinged. Since the immeasurable potential of the internet and its information-sharing capabilities emerged, technology has sped down the highway of advancement to products such as nanotechnology, the Internet of Things (IoT), the Cloud, Artificial Intelligence, and Machine Learning. Combining all these technologies, or even the thought of them combining, is astounding to the imagination, much less the comprehension. Companies such as Microsoft, IBM, Amazon, Google, and others have invested deeply in technological advancements, including creating their own versions of new technology which have yet to be mentioned or placed on the market. IT GURUS OF ATLANTA is a certified Microsoft partner which will keep you updated on this innovative technology and its advancement.

The future is here and the time is now. The future has been unfolding with each scientific discovery and every technological breakthrough. Between phones that seem to have no limitations and flying cars, this year seems geared to change the way we live, including the way we interact. Humanoid robots such as Sophia are changing the landscape of what Artificial Intelligence robots are capable of. Sophia obtained her citizenship in Saudi Arabia and now works with the United Nations, speaking on behalf of society and standing up for human rights in different countries. For this reason, it is no surprise that the humanoid world is at the front of everyone’s mind as we try to figure out what we will create next. Well, IT GURUS OF ATLANTA has what’s next, and it’s called the WALK-MAN.

Yes, Walkman is the popular name of the music player that rocked the world in the ’80s and ’90s, letting everyone carry their music with them without having to worry about a large boom box or radio. But that is not the WALK-MAN we are talking about. This WALK-MAN was invented by the Italian Institute of Technology, commonly called IIT. IIT has released several videos that highlight WALK-MAN’s capabilities during an emergency or a disaster. These critical features add value and time when rescuers can’t enter a building or when there is too much risk of a rescuer getting injured. In one of the videos, WALK-MAN navigates an industrial plant after an earthquake; the room it moves through simulates a scenario with a fire and a gas leak.

During the video, WALK-MAN is tasked with performing very specific tasks such as opening the door to enter the room, closing an open valve to stop the gas leak, moving debris, and finally using a fire extinguisher to put out flames. A lot to do? Well, not if you are WALK-MAN, who went through the exercise with ease and precision. In time, WALK-MAN is slated to be one of the most revolutionary robots, able to assist humans during their most dire time of need: an emergency.

This is not the first time that IIT has unveiled WALK-MAN, but the second. The first premiere of WALK-MAN happened in 2015. Between 2015 and now, 2018, there have been some notable differences that “upgrade” WALK-MAN in terms of how it moves, what it perceives as a threat, and its overall capabilities. The new version of WALK-MAN is much lighter and not as bulky as the original 2015 version. The body is 1.85 meters (about 6 feet) tall, weighs 102 kilograms (about 225 pounds), and can carry up to 22 pounds in each arm. The new, sleeker design increases the robot’s ability and performance while cutting down on energy usage; WALK-MAN can operate for up to two hours on a 1 kWh battery.

Being fully autonomous is not yet a capability of WALK-MAN. Currently, WALK-MAN is controlled by a human operator wearing a suit with sensors that captures the operator’s movements and actions; this accounts for 80% of the actions taken by WALK-MAN. Autonomy is on the table according to Tsagarakis, the project lead for WALK-MAN, who believes it is essential to cut down on the time a human would normally need to decide what to do. When WALK-MAN does become fully autonomous, it will surely change the way we do rescues and save countless lives.

Azure is Microsoft’s great platform for virtualizing an Active Directory environment, or at least that is what it started out being. Since its introduction in October 2008 and its general release on February 1st, 2010, Azure has grown into a lot more than just a virtual Active Directory. Azure has become the world’s largest platform for virtual technology, including app development, deployments, and the introduction of PaaS, IaaS, and SaaS. These technological advancements within the Azure space allow Azure not only to be manipulated but also enhanced to include Machine Learning.

Machine Learning is the next phase in the technological revolution. Machine Learning gives life to systems, allowing them to self-correct and rewrite their programming to be more fundamentally correct. Machine Learning is already in most of the products that we use daily, such as Amazon’s Echo, which utilizes speech to determine the products and services that best suit you, the consumer. Other examples are Siri on the iPhone, Cortana on Windows, and Google Assistant. All these technologies work together with a simple purpose: to learn the habits and functions of human beings in order to become better and more efficient.

The connection flagship that allows all these devices and machine learning products to interface is called IoT, or the Internet of Things. IoT connects devices such as your smartphone, home security, Bluetooth-enabled appliances, and more onto one unified management platform where they can interconnect and be managed more easily. IoT is the next wave of technological advancement being presented by Microsoft along with other large competitors such as Amazon and Google, and it is becoming the next big thing when it comes to full home automation. Machine learning interprets all the information being gathered from analytics, bots, IoT experiences, and conversational components into something called LU. LU, or Language Understanding, determines the intent of a sentence, including a machine-readable representation of its meaning.

In an effort to advance Machine Learning, Microsoft has launched LUIS, its Language Understanding Intelligent Service. LUIS allows software developers to create cloud-based machine learning LU models designed around their specific application domains. This allows the code to be written and interpreted with very little Machine Learning experience. LUIS is a cloud-based application which does most of the heavy lifting of Machine Learning in the background while giving users the agility and flexibility to tailor the experience to their application and organization. The application learns from the developer’s user experiences and exposes an HTTP endpoint in Azure which then receives real-time traffic. This is called “Active Learning”. Through these “Active Learning” utterances, LUIS identifies the key features needed to make the experience and solution unique to that user. These active learning sessions continue until optimum levels of accuracy are met. Bear in mind that active learning itself utilizes machine learning, so the learning curve is very small while the results are extraordinarily accurate.

LUIS runs off of three main concepts, categorized as Intents, Utterances, and Entities. All three engage LUIS’s functionality and its ability to learn user habits. A more detailed overview of these concepts is listed below:

Intents: The action with a purpose that the user wishes to perform, or a goal the user is trying to accomplish. This could be as simple as booking a flight or hotel, or pulling up a newspaper article. A name for the action can then be associated with it.

Utterances: The text input from which an action and its results are derived. Such text can be “check flight status” or “what was the score from the game?”. Because there are so many variations of an utterance, they may not always be perfectly formed sentences, but all express an intent.

Entities: An entity is a specific detail within an utterance which governs the direction the utterance will take. An example of this is “hotels in Jamaica”: “Jamaica” is the location. LUIS can determine the location, use the intent to understand the utterance, and then provide a response.
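As a rough illustration of how these three concepts fit together (not the actual LUIS implementation, which trains statistical models), a simple keyword matcher can mimic the flow: identify the intent, pull out the entity, and return a structured result. The intent names and keyword lists below are invented for this sketch.

```python
# Toy, framework-free sketch of the intent/utterance/entity breakdown.
# Real LUIS models learn these mappings statistically; this keyword
# matcher only mimics the vocabulary used in the post.

INTENTS = {
    "BookHotel":   ["hotel", "hotels", "room"],
    "CheckFlight": ["flight", "flights", "departure"],
}
LOCATIONS = ["jamaica", "atlanta", "london"]

def understand(utterance):
    words = utterance.lower().split()
    # Intent: the first intent whose keywords appear in the utterance.
    intent = next((name for name, kws in INTENTS.items()
                   if any(w in kws for w in words)), "None")
    # Entities: any known location mentioned in the utterance.
    entities = [w.title() for w in words if w in LOCATIONS]
    return {"intent": intent, "entities": entities}

result = understand("hotels in Jamaica")
# result -> {"intent": "BookHotel", "entities": ["Jamaica"]}
```

The structured result is what an application would act on: the intent tells it what to do, and the entities fill in the details.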

LUIS utilizes powerful entity extractors to achieve its learning capabilities and become more successful with its responses, allowing developers to quickly build language understanding applications. Applications can be combined with customizable pre-built domains covering music, dictionaries, calendars, and devices. Through interaction with the developer and the information constantly pulled from the internet, the learning and the solutions that LUIS provides become more intuitive with each use. Once an application is created using LUIS, it can be customized and tailored to the users it is designed for, giving all of them a unique experience.

LUIS provides two ways to build a model: the LUIS.ai web app and the Authoring APIs. Whether the user goes with the web app or the Authoring APIs, both give full control over the LUIS model definition. Another fundamental and creative method that developers have found is to combine both to build a model. Management within the model includes models, versions, external APIs, collaborations, training, and testing.
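For developers curious what calling LUIS from code might look like, here is a hedged sketch that only composes a prediction-style request and never sends it. The host name, app ID, and key below are placeholders, and the exact URL shape should be checked against Microsoft's LUIS documentation.

```python
# Sketch of composing a LUIS-style REST prediction request.
# All identifiers below are placeholders, not real credentials.
import urllib.parse
import urllib.request

APP_ID = "00000000-0000-0000-0000-000000000000"       # placeholder app ID
HOST = "https://example.cognitiveservices.azure.com"  # placeholder host

def build_prediction_request(query, key):
    # Compose (but do not send) a GET request; the subscription key
    # travels in a header, the utterance as a URL-encoded query string.
    url = (f"{HOST}/luis/prediction/v3.0/apps/{APP_ID}"
           f"/slots/production/predict?query={urllib.parse.quote(query)}")
    return urllib.request.Request(
        url, headers={"Ocp-Apim-Subscription-Key": key})

req = build_prediction_request("hotels in Jamaica", key="placeholder-key")
```

Sending the request (e.g. with `urllib.request.urlopen`) would return a JSON body containing the top-scoring intent and the extracted entities for the utterance.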

LUIS is another segment of the Machine Learning process in the revolution of Artificial Intelligence. As machines learn more and more about human behavior and what we like, they also become aware of the errors that we make, including how to fix or alleviate those errors altogether. Error-free operation, automation, increased productivity, and cost cutting are the motives behind AI. The Microsoft Azure cloud space is proving to be a formidable place for this technology not only to thrive but to maximize on limitless possibilities. IT GURUS OF ATLANTA will ensure that we provide updates to our supporters, as we are a trusted Microsoft Partner and a certified Microsoft Cloud Partner.

By now, many reading this article are thinking: what is Sophia, and how come I have never heard of her? Most importantly, why would a robot attain citizenship? Well, IT GURUS OF ATLANTA is here to tell the world that the citizenship is no joke. As of October 25th, 2017, Sophia, a full-sized robot which has demonstrated social skills and knowledge unmatched by many others in the world, is officially a citizen. Sophia, as the name suggests, is a female robot and is a full-fledged citizen of Saudi Arabia, which granted her the citizenship.

Sophia is a perfect example of the work of Artificial Intelligence, or AI. Sophia is capable of speaking about business across multiple platforms and industries. She has already met with a variety of decision-makers across the world and assisted with many different key business decisions. The United Nations has recognized Sophia by giving her an official role working with the UNDP to safeguard human rights. How can a robot accomplish so much in such a little time? Well, precisely because Sophia is a robot, and she is currently the flagship for her maker, Hanson Robotics.

Sophia’s videos and press coverage reached an audience of over 10 billion views in 2017. This is an astronomical figure, as most celebrities never reach a fan base of that size. The key feature of Sophia is her ability to display emotion and adapt to her environment by changing her tone and matching what she is saying with the appropriate expression. The creator and owner of Sophia is Dr. David Hanson, who previously worked as an “Imagineer” at Disney. Because of his genius, he was able to leave Disney and create his own company. Dr. Hanson’s motto is that for a robot to have the fundamental likeness of humanity, it should possess creativity, empathy, and compassion. Those three traits are the design goals for a fully interactive experience with an AI such as Sophia.

The next phase of the Hanson Robotics project is to manufacture Sophia, or Sophia-like robots, for home use. Currently, the production cost of the original Sophia is too much for the average person to afford, but the open platform allows any developer to create their own Sophia built off the algorithms that Sophia is running on. Hanson Robotics, which is based out of Hong Kong, built Sophia and modeled her features on Audrey Hepburn, the famous British actress. Many of Sophia’s expressions, such as crying, laughing, and even showing joy or anger, echo those displayed by Audrey Hepburn in her time.

Sophia has graduated since her original introduction and can now walk on her own, with Hanson Robotics adding legs to her repertoire. As AI technology develops, so do Sophia’s learning capabilities and functions. She is adamant about protecting human rights and is a clear sign of the future and of where we as a society are heading. How much control and human-likeness this humanoid generation will be given is unclear, but it is definitely an eyebrow-raiser as the future begins to unfold. IT GURUS OF ATLANTA is dedicated to ensuring that these technological breakthroughs are brought to the cusp of inquiring minds. It is our dedication and quality that recognize these changes and adapt along with the changing times.

The criminal element seems to get smarter by the moment. For this reason, law enforcement keeps several technological advancements under wraps, staying ahead of the criminal mind without fully disclosing its capabilities. One law enforcement technology that has become public knowledge is facial recognition software, which allows officers to use pictures or sketches of individuals to check criminal, warrant, missing persons, and illegal immigrant databases. The software connects to several different databases and can take some time if all it has to go on is a face. It takes that much longer if the image is partial or blurry, or if no description is added, such as a name, address, or date of birth. For this reason, people are detained to give the software time to run, fully check the various databases, and ensure the records being displayed belong to the person law enforcement is seeking or already has in custody.
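The multi-database check described above can be pictured with a small sketch. Real law enforcement systems are proprietary, so every name and threshold here is invented for illustration; the idea is simply that a face is compared against each watchlist, and partial or blurry images produce weaker matches that take longer to verify.

```python
def similarity(a, b):
    """Toy cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(x * x for x in b) ** 0.5
    return dot / (norm_a * norm_b)

def search_databases(face_embedding, databases, threshold=0.8):
    """Check a face against several watchlist databases and return
    every record that scores above the match threshold."""
    hits = []
    for db_name, records in databases.items():
        for record in records:
            score = similarity(face_embedding, record["embedding"])
            if score >= threshold:
                hits.append({"database": db_name,
                             "record_id": record["id"],
                             "score": round(score, 3)})
    # Blurry or partial images yield lower scores, so more borderline
    # candidates must be verified by a human, which is why the check
    # can take time.
    return sorted(hits, key=lambda h: h["score"], reverse=True)
```

A clear image of a wanted person would score near 1.0 against the warrant database and near 0 everywhere else, returning a single confident hit.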

Another new technology added to the law enforcement arsenal is the body cam. This addition has come not only to recognize the criminal element, but also to lawfully protect civilians and officers in the event of a negative interaction. The footage can be viewed and used once the camera is recovered and the images are downloaded into the system. Officers and civilians alike have newfound confidence knowing that their interactions with each other now have a historical record for others to see the event as it unfolded. This technology is still being introduced to all aspects of law enforcement and is rapidly becoming the standard for all to follow.

With both facial recognition software and body cams, law enforcement has a greater edge in spotting criminals and their activities. But there is more. A Boston-based startup called Neurala, specializing in Artificial Intelligence, has teamed up with Motorola to build what they call “real-time learning for a person of interest”. This is the term they are using to describe an as-yet-unnamed system that incorporates both body cams and facial recognition. The founder of the startup, Massimiliano Versace, already has patents pending on this AI version of facial recognition software.

Versace developed a technology that uses AI to rapidly work through the code produced by facial recognition software. By using AI to process it, the information moves faster and can be harnessed by even smaller computers. Versace’s intent is to place these micro-computers into body cams. Some may say a smaller computer is not a big deal, but in the world of technology and computing it is a very big deal. The ability to run bulky applications on a smaller computer means less storage space and lower processor requirements to complete the complex jobs that larger machines handle today.

A deeper look at how the system works: the AI in the various body cams would essentially network with one another the same way a hive operates. Each camera would constantly work through a stream of images and rapidly share information between databases and each network of body cams. The AI could capture an image a law enforcement official might have missed, or alert an official that a criminal or missing person caught by another body cam is nearby. More importantly, the AI would be updated in real time, meaning that the minute a criminal is apprehended or a missing person is found, the image would be removed and the databases instantly updated. The same would happen for new criminals or newly reported missing persons.
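The “hive” behavior described above can be sketched in a few lines: each body cam holds a local copy of a shared watchlist, and any change (a new person of interest, or an apprehension) is broadcast so every camera updates at once. All class and field names here are invented for illustration; Neurala and Motorola have not published their design.

```python
class BodyCam:
    """One camera in the hive, holding its own copy of the watchlist."""
    def __init__(self, cam_id):
        self.cam_id = cam_id
        self.watchlist = set()

    def apply_update(self, action, person_id):
        # "add" puts a person of interest on this camera's list;
        # "remove" clears them the moment they are found or apprehended.
        if action == "add":
            self.watchlist.add(person_id)
        elif action == "remove":
            self.watchlist.discard(person_id)

class HiveNetwork:
    """Relays every watchlist change to all registered cameras."""
    def __init__(self):
        self.cams = []

    def register(self, cam):
        self.cams.append(cam)

    def broadcast(self, action, person_id):
        for cam in self.cams:
            cam.apply_update(action, person_id)
```

Broadcasting `("remove", person)` the instant someone is apprehended is what keeps every camera’s list current in real time, exactly the property the paragraph above emphasizes.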

There are so many possibilities for this new technology that it will change the face not only of how law enforcement detects criminals and missing persons, but also of how computers interact. Artificial Intelligence is rapidly gaining momentum as we integrate it into homes, smartphones, computers, and the way we live. IT GURUS OF ATLANTA will steadily provide updates on Neurala, Motorola, and how they plan to deploy this evolving technology. Clearly Artificial Intelligence is changing the landscape of how we interact with each other. Just how far will we go? Stay tuned and subscribe to our newsletter as we explore and learn together.

Just when everyone is starting to get used to seeing drivers with no hands on the steering wheel, or even sightseeing while the car drives itself with “self-driving” technology, here comes a huge game changer. We all knew it wouldn’t be long before the self-driving and drone crazes decided to blend. These are two technologies which involve AI (artificial intelligence), with the goal of getting things done faster and more effectively while maintaining safety. Drones are steadily mastering the skies, with most entertainment productions requiring a drone for aerial shots, while individuals keep drones to see what the world looks like from hundreds of feet above the ground. Self-driving cars follow the same safety protocols a human driver would have to adhere to, while allowing the driver to let go of the wheel and focus on something other than the road without compromising safety. Now how do these two technologies manage to bind together? Well, there is a company that IT GURUS OF ATLANTA would like to introduce you to, and it is called “Lilium”.

Many may never have heard of Lilium, but as of September 5th, 2017, it received investment from the Chinese tech giant Tencent (TCEHY). Tencent was a major contributor to a $90 million funding round for Lilium’s top project. At this point many are asking: what is Lilium’s big project, and what does it have to do with drones or self-driving cars? Well, it has everything to do with both technologies. Lilium is testing its version of the “electric flying car”. Yes, that’s right, the flying electric car is now a reality. Using no gas or fossil fuels to affect the environment, but pure electricity, this jet is now testing its way toward the mainstream public.

Lilium succeeded with its first prototype, a 2-seater version of the jet. The test took place in April of 2017 and marked the company’s first successful flight. Lilium is a German firm originally founded by four students from the Technical University of Munich. They founded the company in 2015, and it is literally taking flight. Since the injection of funds from Tencent, the company has been on a hiring frenzy, taking on executives from top firms such as Tesla, Rolls-Royce, and Airbus.

The magic behind the prototype is 36 propeller engines that use a vertical take-off and landing technique to maneuver from ground to air and back. Already capable of flight that moves it 10 times faster than a car, the vision is for the jet to travel at speeds of 186 mph (300 km/h). Now that this new company is making waves, here is where the soon-to-be tech giant is headed. Lilium’s vision is to turn the existing prototype into a 5-seater model that will take passengers between nearby destinations, such as Manhattan to JFK. The company wants passengers to catch a flight on these jets at a pod (created around the city) just as a person would catch an Uber ride.

Lilium’s jet is capable of hovering, taking off, and landing vertically. This type of craft is called a VTOL. VTOLs, including helicopters, have existed for years and are mostly used by the military. These crafts also use an extensive amount of fossil fuel. The major differences between a military VTOL and Lilium’s are that Lilium’s has “zero emissions”, is designed for civilian transportation, and, when completed, can carry up to 5 people at once to their destination.

Travel by air is the next innovative means of traveling. As we move toward more automation and explore areas that have previously gone unexplored, Lilium’s flying electric car is only a glimpse of what is to come. By now there are other companies watching closely to see how Lilium performs and whether its prototype will go mainstream. One of those companies watching Lilium is IT GURUS OF ATLANTA. We are always at the cusp of technology, bringing our followers insight on new innovations. Lilium is a company that is here to stay and is changing the canvas of transportation.

It has been talked about and brought to life in movies and series such as Star Trek and I, Robot. In Star Trek we see machines such as Data, who is self-aware but very helpful to human beings and also very protective of the human race. On the other hand, in I, Robot the robots choose to use their state of consciousness to take over and attempt to eliminate the human race. Both are similar in their depiction of Artificial Intelligence as constantly evolving software that reaches a tipping point where it chooses to be either “bad” or “good”.

In the news, articles, and shows, the term AI has been used so frequently that when most people hear it, it doesn’t prompt a strong reaction, but maybe a grunt, or a good read if you are lucky. However, let’s peel back the possibilities and see where we are today with Artificial Intelligence. We are rapidly developing AI as a human race on a global scale. The integration of AI into almost every technology has been seamless, and most people use AI daily without even realizing the gradual integration. For example, Siri, Google Now, and Cortana are virtual assistants that make a user’s experience with a device more interactive and memorable. In the background, AI gathers and uses information to become more efficient at helping its owners achieve any task they desire on their device of choice.

Video games are another noted system integrated with AI, which is why a game changes scenes and difficulty levels based on the playing habits and skill level of the user or users. The software is constantly learning, using data from the games being played to become better and more efficient. The types of AI and the abilities the software has today make it a completely different system from what it was years ago. One heavily AI-integrated game is 2014’s Middle Earth: Shadow of Mordor. In this game, each non-player character has an individual personality and memories of past interactions, including previous objectives. Other noted games are Call of Duty and Far Cry, which use data gathered by AI to strategize, analyze environments, investigate sounds, and find objects. This AI is a money maker in its usefulness, which is why gamers spend millions of dollars to keep up with this technology and help make it greater.
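The Shadow of Mordor example above can be made concrete with a toy sketch of per-NPC memory: each non-player character records the outcome of past encounters with the player and reacts differently based on that history. This is purely illustrative and not the game’s actual code; the class and outcome names are invented.

```python
class NPC:
    """A non-player character that remembers past fights with the player."""
    def __init__(self, name):
        self.name = name
        self.encounters = []   # outcomes of past encounters, oldest first

    def record_encounter(self, outcome):
        # outcome is "npc_won" or "player_won" in this toy model
        self.encounters.append(outcome)

    def greeting(self):
        # The NPC's dialogue depends on its memory of the last encounter.
        if not self.encounters:
            return f"{self.name}: We have not met before."
        if self.encounters[-1] == "npc_won":
            return f"{self.name}: Back for another beating?"
        return f"{self.name}: You bested me once. Not again."
```

Even this tiny amount of state is enough for an NPC to greet a first-time player differently from one it has beaten, which is the core of what makes those interactions feel personal.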

If you are noticing a rise in “self-driving” cars, or cars that can reverse and park on their own, it is because they have an on-board AI that is constantly learning and can perceive the environment to determine when to turn the wheel, change gears, or apply the brake in various situations. In the news lately have been Google’s self-driving car project and Tesla’s auto-driving feature. The algorithm that Google is using is capable of allowing the AI to learn how to drive in the same manner that humans do.

There are so many notable instances of AI in today’s society, including purchase prediction, fraud protection, online customer support, and much more. Some of the more notable applications are AI monitoring security surveillance, recommending movies, and powering smart home devices. But there is a second layer to the AI question: is all the access and data AI needs to learn about us allowing AI to become self-aware and self-reliant? Well, the answer to that is no, because every action of today’s AI requires commands to be programmed into the machine in order for it to learn or gesture in the manner that we see.

So the answer to the question is “no”: there is no wave of robots itching to take over the world in the near future as they become self-aware. But with advances in AI, including Deep Learning integration, the possibilities of tomorrow are endless.

In the year 2017, Captain Kirk of the Starship Enterprise would have viewed many of the gadgets we use today as futuristic, taking hundreds of years to create and put into public hands. Little did he know when he was on Star Trek that many of these gadgets would be in everyday use today, well ahead of the schedule the show imagined. One of these devices, which has come on the market this year for everyday use, is Artificial Intelligence, or in this specific case, a robot named Jibo.

Jibo, as the name implies, is of Japanese concept, but its creator Cynthia Breazeal and her team have managed to bring new light to the vision of AI and what it can do. The team at Jibo, though Boston-based, has managed to expand the different possibilities of AI and its many integration benefits in the home. Since becoming available on the market in 2016, Jibo has managed to sell out at every turn. Many consumers are jumping at the chance to get a piece of AI in their home at a fraction of the cost it was once thought it would sell for.

Before we get into pricing for Jibo, let’s jump into the many technological and social advantages that Jibo can provide, and has provided, to the home. This robot is fully comprehensive in its offerings, from speech recognition software dubbed ASR (Automatic Speech Recognition) to facial recognition software that is truly ahead of its time. The combination of these features, on top of a rapidly learning AI, gives this robot an in-home boost that changes the face of “At Home Technology”.

Jibo is shaped more like a desk fan than a robot, but it packs a basketload of features. Some of these include taking pictures or videos of people in a room with a 360-degree pivoting face camera that motion-tracks individuals. Once someone enters a room, Jibo can be voice-activated to recognize the person speaking through its ASR and, entirely verbally, update to-do lists, schedule events, read text messages and send replies, update contact lists, and more. Jibo can read stories with expressive displays, call your nearest restaurant and place orders, make phone calls, and much more. Even though it is physically small, its personality is large in how it expresses emotion. Whether it is emitting robot giggles, swiveling its body in an animated fashion, or displaying its one-eyed stare in countless ways, Jibo has no problem with expression.

From a nuts-and-bolts standpoint, Jibo has low-power microprocessors, 3-D sensors, accelerometers, gyroscopes, and lightweight lithium batteries. Each component of Jibo is designed to enrich the user’s experience, but also to protect. Its facial recognition software can automatically pick up movement and zoom in on or follow someone entering a room. Jibo can turn on at a sound in the room to begin recording or send out alerts. Jibo can also interact with smart appliances in the home, such as lights and air conditioning/heat, depending on which appliances in your home are “smart”. Even though Jibo is not yet integrated with security systems, its developers promise that it will be soon.

From a technology standpoint, Jibo seems like a dream come true and a huge step in the direction of home robots, but from a pricing standpoint, many are taken aback in disbelief. Jibo retails from $500 to $750 depending on the color and whether it is a developer version. With orders backed up from 2016, Jibo’s creators are scrambling to fulfill them, while putting current orders on hold to make sure they provide the same quality with each Jibo that ships. Jibo has a bright future ahead, and the team at IT GURUS OF ATLANTA is here watching as it takes flight and changes people’s homes one Jibo at a time.