Archive

Posts Tagged ‘app’

Azure is Microsoft’s cloud platform, and at least at first it was pitched as a great way to virtualize an Active Directory environment. Since its introduction in October 2008 and its production release for the masses on February 1st, 2010, Azure has grown into a lot more than just a virtual Active Directory. Azure has become the world’s largest platform for virtual technology, spanning app development, deployments, and the introduction of PaaS, IaaS, and SaaS. These technological advancements within the Azure space allow Azure not only to be manipulated, but also enhanced to include Machine Learning.

Machine Learning is the next phase in the technological revolution. Machine Learning gives life to systems, allowing them to self-correct and rewrite their own programming to become more fundamentally correct. Machine Learning is already in most of the products we use daily, such as Amazon’s Echo, which uses speech to determine the products and services that best suit you, the consumer. Other examples are Siri on the iPhone, Cortana on Windows, and Google Assistant. All these technologies work toward a simple purpose: to learn the habits and functions of human beings in order to become better and more efficient.

The connective flagship that allows all these devices and machine learning products to interface is called IoT, or the Internet of Things. IoT connects devices such as your smartphone, home security, Bluetooth-enabled appliances, and more onto one unified management platform where they interconnect and become easier to manage. IoT is the next wave of technological advancement being presented by Microsoft along with large competitors such as Amazon and Google, and it is becoming the next big thing in full home automation. Machine learning interprets all the information gathered from analytics, bots, and IoT experiences, including conversational components, through a discipline called LU. LU, or Language Understanding, determines the intent of a sentence and produces a machine-readable meaning representation.

In an effort to advance Machine Learning, Microsoft has launched LUIS, its Language Understanding Intelligent Service. LUIS allows software developers to create cloud-based machine learning LU models designed around their specific application domains, so code can be written and interpreted with very little Machine Learning experience. LUIS is a cloud-based service that does most of the heavy lifting for Machine Learning behind the scenes while giving users the agility and flexibility to tailor the experience to their application and organization. The developer publishes the model to an HTTP endpoint in Azure, which then receives real-time traffic. Through a process called “Active Learning,” LUIS reviews real user utterances and identifies the key features needed to make the experience and solution unique to that user. These active learning passes continue until the required level of accuracy is met. Bear in mind that active learning itself uses machine learning, so the learning curve is very small while the results are extraordinarily accurate.
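Once a model is published, that HTTP endpoint is reached with an ordinary web request. The sketch below shows one way to build such a prediction URL; the region, app ID, and key are placeholders, and the v2-style URL shape is an assumption to be checked against the endpoint string shown on your app’s publish page.

```python
from urllib.parse import urlencode

def build_prediction_url(region, app_id, subscription_key, utterance):
    """Build a LUIS v2-style prediction URL for a published endpoint.

    The region, app ID, and key passed in are placeholders -- substitute
    the values from your own app's publish page.
    """
    base = f"https://{region}.api.cognitive.microsoft.com/luis/v2.0/apps/{app_id}"
    query = urlencode({"subscription-key": subscription_key, "q": utterance})
    return f"{base}?{query}"

# Illustrative values only; the app ID and key here are dummies.
url = build_prediction_url("westus", "00000000-0000-0000-0000-000000000000",
                           "<your-key>", "hotels in Jamaica")
```

Sending a GET request to that URL (for example with `requests.get`) is what delivers the “real-time traffic” the model learns from.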

LUIS is built on three main concepts: intents, utterances, and entities. All three drive LUIS’s functionality and its ability to learn user habits. A more detailed overview of these concepts is given below:

Intents: An intent is the action a user wishes to perform or the goal the user is trying to accomplish. This could be as simple as booking a flight, reserving a hotel, or pulling up a newspaper article. A name is then associated with each action.

Utterances: An utterance is the text input a user supplies to trigger an action, such as “check flight status” or “what was the score from the game?”. Because there are so many variations, utterances may not always be perfectly formed sentences, but each one expresses an intent.

Entities: An entity is a specific detail within an utterance that governs the direction the utterance will take. In the utterance “hotels in Jamaica,” for example, “Jamaica” is the location. LUIS extracts the location, combines it with the detected intent to understand the utterance, and then provides a response.
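To make the three concepts concrete, here is a hypothetical prediction result for “hotels in Jamaica” and the few lines of code that pull the intent and entities out of it. The field names follow the shape of a LUIS v2 endpoint response, but treat the exact keys and the `SearchHotels` intent name as assumptions for illustration.

```python
# Hypothetical prediction result for the utterance "hotels in Jamaica",
# shaped like a LUIS v2 endpoint response (field names are an assumption).
response = {
    "query": "hotels in Jamaica",
    "topScoringIntent": {"intent": "SearchHotels", "score": 0.97},
    "entities": [
        {"entity": "jamaica", "type": "Location", "score": 0.93},
    ],
}

# The intent tells the app *what* the user wants...
intent = response["topScoringIntent"]["intent"]

# ...and the entities supply the specifics (here, the location).
locations = [e["entity"] for e in response["entities"]
             if e["type"] == "Location"]
```

With the intent and the location in hand, the application can route the request to its hotel-search logic.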

LUIS utilizes powerful entity extractors to achieve its learning capabilities and become more successful with its responses. LUIS allows developers to quickly build language understanding applications, which can then be combined with customizable pre-built apps covering music, dictionaries, calendars, and devices. Through interaction with the developer and information constantly pulled from the internet, the learning and solutions that LUIS provides become more intuitive with each use. Once an application is created using LUIS, it can be customized and tailored to the users it is designed for, giving each of them a unique experience.

LUIS offers two ways to build a model: the LUIS.ai web app and the Authoring APIs. Both give full control over the LUIS model definition, and developers have also found it effective to combine the two. Model management covers models, versions, external APIs, collaborators, training, and testing.

LUIS is another step in the Machine Learning process within the revolution of Artificial Intelligence. As machines learn more and more about human behavior and what we like, they also become aware of the errors we make, including how to fix or alleviate those errors altogether. Error-free automation, increased productivity, and cost cutting are the motives behind AI. The Microsoft Azure cloud space is proving to be a formidable place for this technology not only to thrive but to maximize limitless possibilities. IT GURUS OF ATLANTA will ensure that we provide updates to our supporters, as we are a trusted Microsoft Partner and a certified Microsoft Cloud Partner.

The wave of the future keeps getting brighter and brighter as 2017 pushes forward. There are so many advances that IT GURUS OF ATLANTA has covered, and still more yet to cover. One of the most recent innovations comes from carmakers now speaking of retinal scans to replace conventional car keys. This innovation could revolutionize the way carmakers produce their lineups and the way consumers use cars in the future. The software would ship as an app from the carmaker and then integrate with the user’s smartphone hardware to make each user’s experience unique.

The concept uses NFC, or near-field communication, combined with the biometric capabilities of devices such as the iPhone X, which performs facial recognition. The cameras and algorithms in devices like the iPhone X provide the security and uniqueness automakers need to place an app on the device as a car-key replacement. One of the major automakers pushing NFC is BMW, long noted as an innovator in integrating technology with automobiles. Another concept BMW is considering is using the rear-view mirror to embed NFC security in order to grant access to the car’s systems. This would act as a secondary security feature on top of the initial biometric scan on the phone used to open the doors of the vehicle. Gentex, a maker of rear-view mirrors, was the primary company mentioned at a recent Frankfurt Auto Show as integrating this technology.

BMW board member Ian Robertson recently told Reuters, “People never take the key out of their pockets. So why do I need to carry it around? We are looking at whether it is feasible, and whether we can do it. Whether we do it right now or at some point in the future, remains to be seen.” BMW is already setting the trend ahead of this technology by priming the market with its current car keys. BMW is one of the few carmakers offering an LCD display key, which combines the wireless capabilities of a cell phone with a key fob. BMW’s app currently allows car owners to unlock the car and remote-start it to warm or cool the cabin before the driver gets in. However, as a security feature, the car will not move without the physical key present inside the vehicle.

Most of the backlash has come from security experts who question whether the car’s security could be hacked, making car theft easier. The more mobile and internet-based access physical objects have, the more double-edged the sword becomes: there is the convenience of access, but also the risk of making something tangible accessible to the intangible threat of hacking. Hackers are among the most advanced criminal threats to the livelihood of society, and financial, educational, government, and now automotive institutions join the ranks of entities at risk as they add convenience to their products and services. However, in an ever-changing technology landscape, if some carmakers integrate the technology, the others have little option but to join the standard to maintain their competitive edge.

Tesla, the innovator in electric cars, has already stepped up to the plate: along with its app, Tesla ships NFC-enabled key cards that allow the car to be unlocked or started with a single tap. Tesla also uses Bluetooth technology to handle unlocking and starting the vehicle. Tesla continues to innovate, and many look toward its much-anticipated 2018 models. Tesla always raises the bar between what is thought to be practical and what is envisioned, and industry carmakers and accessory manufacturers such as Gentex are envisioning the future with successful technology implementations of their own.

Gentex, which owns HomeLink, is in the process of integrating biometric algorithms to control access to cars. It is partnering with Delta ID to use its current ActiveIris technology to take its systems to the next level. HomeLink is currently embedded in most models of the rear-view mirrors Gentex produces, controlling garage doors from the vehicle as it approaches or leaves the home. Gentex has spoken of integrating features from an existing app that offers facial, voice, and fingerprint recognition. Gentex’s road map includes keyless car operation with manufacturers such as BMW, then expansion to full home automation covering lighting, security, and HVAC.

As this technology develops, integrates, and becomes mainstream, IT GURUS OF ATLANTA will be there every step of the way to give insightful updates and feature concepts that are innovative to the industry.

Now who is ready for more state-of-the-art technology? IT GURUS OF ATLANTA says the world is ready! When we speak of state of the art, we speak of the next level in technological advancement. Earlier this year we covered the new drones Amazon is pushing to make its deliveries more efficient; once approved by the government, Amazon will soon be delivering to the doorsteps of millions of American homes. Now, with even better drones being built by the hour, why not reshape the way we travel while we are at it? Well, a company in China is doing just that. The company is called Ehang, and it has developed passenger drones capable of carrying people to and from locations the same way a delivery drone carries a package.

The prototype has been cleared for testing in the state of Nevada. With the possibility of this passenger drone being sold commercially, this is a huge move for the company to get its product into the US. Ehang’s drone can carry one passenger for 23 minutes of flight in any direction. It can reach altitudes of 11,500 feet and speeds of 63 mph. The drone is fully electric, has 8 propellers on 4 main arms (each with an upper and lower rotor), can carry up to 264 lbs, and otherwise shares the features of a conventional quadcopter.
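Taken together, those published figures put an upper bound on the drone’s point-to-point range: about 24 miles if it could hold top speed for the full flight time. A quick back-of-the-envelope check, using only the numbers quoted above:

```python
# Back-of-the-envelope range from the figures quoted in the specs above.
top_speed_mph = 63      # maximum speed
flight_time_min = 23    # maximum flight time on one charge

# distance = speed * time (converting minutes to hours)
range_miles = top_speed_mph * flight_time_min / 60
```

In practice the usable range would be shorter, since cruising at top speed for the entire battery life is an optimistic assumption.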

Ehang partnered with the Nevada Institute for Autonomous Systems (NIAS) and the Governor’s Office of Economic Development (GOED) in order to get approval for the testing in Nevada. The goal of the company is to unleash the possibilities of aerial transportation on a commercial level. The vision is for passengers to input their destination into the drone and then allow it to complete the trip autonomously. These tests in Nevada are necessary for the FAA to approve the Ehang 184 for commercial use. Nevada has been noted as one of the forward-thinking states, having allowed autonomous vehicles such as the Google self-driving car to be tested on public roads.

The vehicle would not have any controls for the passenger to use. Passengers would simply program their destination in the app, the vehicle would arrive to pick them up, and it would then take them to their destination, all with very little interaction. As a backup, the company states that it would establish a remote-control center that could take over the vehicle should any problems arise.

Being that this is cutting-edge on the market, Ehang has matched the price to the demand by asking $200,000 to $300,000 per vehicle. Even with the drone currently in testing, it still has hurdles to overcome, such as using the 4G wireless network to drive its GPS, divert from potential hazards, and detect more efficient routes. The FAA’s current concerns are the lack of onboard controls as well as general safety, concerns that the tests in Nevada will have to address.

With the growth of drone usage and autonomous vehicles taking hold at companies such as Uber, Lyft, and Google, and now with the Ehang 184, it is clear that autonomous vehicles are here to stay. No longer are they a vision of the future or of sci-fi movies. The automation and sophistication of the future is here now, in 2017.

Now who in the world of technology, sports, or entertainment has not heard of the huge Main Event on Saturday, August 26th, 2017? The main event I am speaking of is McGregor vs. Mayweather. This fight has been all over the news since the beginning of the name calling, extravagant photo scenes, press conferences, and live appearances by both fighters showing off their personalities. Everyone from the young to the old with a television, smartphone, tablet, computer, radio, or just general access to the internet has heard of this event in the months it has been in the making. From McGregor, coming from a mixed martial arts background with multiple titles, to Mayweather, an undefeated boxing legend, both of these box-office smashers bring a plethora of experience from different arenas to this one history-making event. These two titans are engaged in a battle that will reveal one victor.

As sports fans and entertainment enthusiasts gather to watch the event either live or via multiple broadcasting platforms, technology experts are geared up for the possible introduction of a new technology to the Main Event arena. This new technology is called “StrikeTec”: wearable sensors the fighters would wear to track their movements, force, number of punches thrown, and speed. This valuable data can capture details that the naked eye sometimes misses, and it can be reused for training and even gaming software.

The StrikeTec wearable sensors progress and learn along with the user. The more the user wears the devices, the more accurately the data evaluates performance, yielding meaningful insights and reports designed to improve it. Coaches can use the data to determine the force of a punch, the speed at which punches are being thrown, and the rate of exhaustion more accurately than ever. Rather than relying only on techniques that have been effective in the past, coaches now have data showing that technology can blend naturally with physical improvement.

The sensors themselves are 7mm in height and 18mm in length. They attach to the user’s wristband and communicate via Bluetooth with the StrikeTec Boxing Training App. The user’s routines and current stats are translated into real-time data, which a coach can use to decide whether to increase or decrease the boxer’s activity during a fight.
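StrikeTec’s actual data format and app internals are not public here, so purely as a hypothetical illustration, the sketch below shows how metrics like those described above (peak hand speed, peak force) could be derived from raw wrist-accelerometer samples. The striking-mass constant and the sample data are invented for the example.

```python
# Hypothetical illustration only: StrikeTec's real data format and API are
# not published in this article. This sketch shows how punch speed and
# force could be derived from raw wrist-accelerometer samples.

FIST_AND_GLOVE_KG = 0.9   # assumed effective striking mass (illustrative)

def punch_metrics(accel_samples_ms2, dt_s):
    """Integrate acceleration to peak hand speed; estimate peak force (F = m*a)."""
    speed = 0.0
    peak_speed = 0.0
    peak_accel = 0.0
    for a in accel_samples_ms2:
        speed += a * dt_s                 # rectangle-rule integration
        peak_speed = max(peak_speed, speed)
        peak_accel = max(peak_accel, a)
    return peak_speed, FIST_AND_GLOVE_KG * peak_accel

# A constant 100 m/s^2 burst sampled every 5 ms for 50 ms:
speed, force = punch_metrics([100.0] * 10, 0.005)
```

Real sensors would add filtering and calibration, but the shape of the computation (integrate for speed, scale peak acceleration for force) is the same kind of analysis a coach’s dashboard would run.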

Between Mayweather and McGregor, it is possible we will see this technology used to analyze how the fighters are performing and how the coaches communicate with them. Even though it is unconfirmed that the McGregor vs. Mayweather fight will use this exact technology, the boxing and technology worlds are abuzz with the possibilities of technology and boxing blending in this huge Main Event.

For the gamers amongst us, this data will surely be used in future games from EA Sports and other large gaming companies. Previously, the data from these fighters’ bouts and training was compiled either by the naked eye or through sensor-laden suits the fighters wore to monitor movement. The data used in previous games was sometimes viewed as biased; however, with the current evolution of technology and its accuracy, gamers are certain to get more accurate games. The invaluable data StrikeTec is capturing is transitioning not only from boxing but into other sports such as mixed martial arts and kickboxing. As the technology’s water-resistant capabilities expand, swimming and other water sports are sure to follow in the line of sports it will be used to monitor! IT GURUS OF ATLANTA is always there at the edge of cutting-edge technology, and we will embrace the efforts of StrikeTec as they expand their growing reach in the sporting arena.