Abstract

A method of validating a user for accessing a secure system comprising selecting a picture that is prompted to the user, generating, through the user, an intelligent voice print regarding the selected picture, matching the intelligent voice print associated with the selected picture to a stored authentication voice print and picture pair, authenticating the user when the intelligent voice print is matched to within a predetermined voice tolerance, verifying a textual component of the intelligent voice print to within a predetermined textual tolerance, validating the authenticating and the verifying of the user, and receiving access to the secure system based on the validating of the user against the stored intelligent voice print and picture pair.

Description

CROSS-REFERENCE TO RELATED APPLICATION

The present application claims the benefit of priority to U.S. Provisional Patent Application No. 62/298,041, filed Feb. 22, 2016, titled “PSYCHOLBABBLE”, which is hereby incorporated by reference in its entirety.

FIELD

The present invention relates generally to validating a user and more particularly, to software applications directed to validating a user for access to a secure system.

BACKGROUND

Security is a revolving door: what is secure today will not be secure tomorrow. With the ever-increasing skill and determination of today's hackers, personal data is more at risk than ever. Authentication methods currently on the market range from a simple login to slightly more sophisticated means, such as the 2-step verification methods used by Facebook and Google. This only solidifies the point that the more important the data, the more important the authentication. Banks use not only a login, but also extra 2- or 3-step processes such as picture verification, passphrase verification, and browser identification. These are very good methods; however, all can still be circumvented because there are a limited number of possible correct answers.

With the development of an essentially “un-hackable” solution, the world (and everyone's bank accounts) would be more secure than ever. Accordingly, there is a need for a secure login process that overcomes the shortcomings stated above.

SUMMARY

The present invention aims to address the above by providing an ironclad login process that removes the need for written passwords by utilizing an infinite (picture) to infinite (voice) method of authentication.

An exemplary embodiment of a method of validating a user for accessing a secure system comprises selecting a picture that is prompted to the user, generating, through the user, an intelligent voice print regarding the selected picture, matching the intelligent voice print associated with the selected picture to a stored authentication voice print and picture pair, authenticating the user when the intelligent voice print is matched to within a predetermined voice tolerance, verifying a textual component of the intelligent voice print to within a predetermined textual tolerance, validating the authenticating and the verifying of the user, and receiving access to the secure system based on the validating of the user against the stored intelligent voice print and picture pair.

In related versions, the method further comprises entering a username and a password.

In related versions, the method further comprises generating at least one device identifier based on a device component of a device used to access the secure system.

In related versions, access is received based on a matching of the at least one device identifier to a previously stored device identifier.

In related versions, the method further comprises generating a location identifier based on a predesignated location of the user.

In related versions, access is received based on a matching of the location identifier to a previously stored location identifier.

In related versions, the method further comprises generating identification voice prints in response to stored user identification questions, and receiving access to the secure system based on biometric authentication of the identification voice prints.

An exemplary embodiment of a method of validating a user comprises prompting a user to select and describe an image, receiving a picture selection by the user, receiving an intelligent voice print from the user based on the picture selection, verifying a textual component of the intelligent voice print, authenticating the intelligent voice print, validating the user based on the verifying and authenticating, and granting access to the user based on the validating of the user.

In related versions, the method further comprises receiving a username and a password, and validating the username and the password.

In related versions, the intelligent voice print matches a previous picture and intelligent voice print pair selection that was selected and stored by the user.

In related versions, verifying the textual component comprises converting the intelligent voice print to a text file and comparing the text file to a previously stored text file.

In related versions, the textual component is verified if the comparing is within a set predetermined tolerance.

In related versions, the intelligent voice print is authenticated if the comparing is within a set predetermined tolerance.

In related versions, the method further comprises generating a picture presentation.

An exemplary embodiment of an electronic device for executing a software application for validating a user for accessing a secure system comprises an input for receiving a picture selection by the user, a voice input for receiving from the user an intelligent voice print based on the picture selection, a verification component for encrypted communication with a verification server for verifying a textual component of the intelligent voice print, an authentication component for encrypted communication with an authentication server for authenticating the intelligent voice print, and a validation component for encrypted communication with a validation server for validating the user based on the authenticating and verifying of the user.

In related versions, the electronic device further comprises at least one device component identifier for use in authenticating the electronic device.

In related versions, the electronic device further comprises a location transmitter for encrypted transmission of a location of the user for use in validating a predesignated location of the user.

In related versions, the intelligent voice print is within a set predetermined time threshold.

In related versions, the electronic device is a desktop computer, a mobile device, a website, a server farm, a server, a virtual machine, a cloud server, and/or a cloud virtual machine, and the software application is a plug-in application to other software or hardware.

The contents of this summary section are provided only as a simplified introduction to the invention, and are not intended to be used to limit the scope of the appended claims. The present disclosure has been described above in terms of presently preferred embodiments so that an understanding of the present disclosure can be conveyed. However, there are other embodiments not specifically described herein for which the present disclosure is applicable. Therefore, the present disclosure should not be seen as limited to the forms shown, which should be considered illustrative rather than restrictive.

DRAWINGS

Other systems, methods, features and advantages of the present invention will be or will become apparent to one of ordinary skill in the art upon examination of the following figures and detailed descriptions. It is intended that all such additional apparatuses, systems, methods, features and advantages be included within this description, be within the scope of the present invention, and be protected by the appended claims. Component parts shown in the drawings are not necessarily to scale, and may be exaggerated to better illustrate the important features of the present invention. In the drawings, like reference numerals designate like parts throughout the different views, wherein:

FIG. 1 is a flowchart depicting an exemplary embodiment of a method for setting up a secure login.

FIG. 2 is a flowchart depicting an exemplary embodiment of a method for accessing a secure system using the secure login.

FIG. 3 is a block diagram depicting an exemplary electronic device for accessing a secure system using the secure login.

DETAILED DESCRIPTION

Various aspects are now described with reference to the drawings. In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of one or more aspects. It may be evident, however, that the various aspects may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form in order to facilitate describing these aspects.

As used herein, a “voice print” is defined as an intelligent verbal response by a user to a stimulus. The intelligent verbal response comprises a biometric component as well as a textual component. The biometric component is for authenticating the user by the sound of his/her voice. The textual component is for verifying the speech content of the verbal response. As used herein, “voice print” and “intelligent voice print” are used interchangeably.

Referring to FIG. 1, aspects of a method 100 that can be implemented as a software application for setting up and creating a secure login are illustrated. The method 100 may include, for example, at 102, a user creating a username and password. In certain versions, the username can be an email of the user. In related versions, the user will be prompted to repeat the password in order to verify correct spelling of the password.

After successfully entering the username and password, at 104, the user will be directed to a management page that will allow them to upload pictures that they can generate a voice print (i.e., an intelligent voice print) against. In certain versions, a maximum of three pictures can be uploaded. Alternatively, a variety of picture upload limits can be utilized.

Before the secure login can be created, at 106 the user must create a valid voice print associated with each uploaded picture. The voice print for each picture can be created by using: 1) a picture-upload browse button, 2) a delete button, 3) a create voice print button, and 4) a validate voice print button (for use after the voice print is created). These buttons can be implemented on a graphical user interface (GUI) as known in the art.

At 108, once the create voice print button is pushed, a message box will appear with the words “Press [record] and say what you see in your special picture.” Various alternatives to this message can also be used that convey the same meaning to the user. The user is required to have at least one picture and voice print recording pair.

At 110, the user will be required to read a phrase that will be recorded and used to generate a footprint key for use later in voice authentication. For example, the footprint key can be generated by conventional means, such as, but not limited to, training a biometric authentication system provided by Kivox.

In related versions, the phrase can be a poem, song, or any other stimulus that will generate a verbal response from the user. In related versions, the phrase can be of various lengths, including, but not limited to, 5 seconds, 10 seconds, 15 seconds, 30 seconds, or 45 seconds. The recording of the phrase can be automatically stopped once the desired length of time has elapsed.

At 112, the recorded phrase is sent to an authentication server, such as, but not limited to, Kivox (which can be hosted on the same server as the software application), to train a passive voice detection service on the authentication server. In related versions, prior to submitting the passive training file, the user can verify the recording to make sure it was a good recording.

In related versions, the user can delete any created picture-voice print combination, but must at least have one valid picture and voice print pair to continue.

In an alternative aspect, a user that is blind can be prompted to listen to a song, jingle, poem, phrase, or other audio signal. The blind user can then be prompted to sing, hum, whistle, or otherwise repeat the audio signal. The blind user's response can be recorded and used for authentication as described herein in accordance with the scope of the invention.

At 114, an additional layer of protection can be selected by the user in the form of a device-specific lockdown option. When the user has enabled this feature, at 116, upon user registration, the application (either a thick native app or a desktop app) will pull available unique device IDs and store them to restrict access to just those devices.

In the case of mobile devices (e.g., iPhones, Android smart phones, iPads, tablet computers, etc.), it is likely that just a single Device UUID will be accessible to authenticate the device with.

In the case of desktop software, available IDs will likely be Processor ID, Memory ID, Motherboard ID, and Network Card MAC address.

When a user is setting up their account, they will be able to “Create Allowed Device” in their account. This will allow them to login and be authenticated with a device they have listed. Users can also “Remove An Allowed Device” from their list of devices.

It is understood that Unique Device ID restriction is not a foolproof method of security, as often these kinds of IDs can be spoofed. However, this is an additional layer of security that makes it that much less likely that a hacker will be able to combine their efforts to penetrate all the layers of security.

To validate the device ID information, the system will simply do a text comparison of the device ID information that is stored in the user profile with the information provided at login time by the device. In related versions, algorithmic permutations can be performed based upon the data that the devices give to the system in order to add another layer of security.
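The text comparison and "algorithmic permutation" described above can be sketched as follows. The field names and the use of a SHA-256 hash as the permutation are illustrative assumptions, not part of the disclosure:

```python
import hashlib

def validate_device(stored_ids: dict, presented_ids: dict) -> bool:
    """Text-compare the device IDs stored in the user profile against the
    IDs presented by the device at login time; every stored ID must match."""
    return all(presented_ids.get(key) == value
               for key, value in stored_ids.items())

def permuted_fingerprint(device_ids: dict) -> str:
    """Derive a single check value from the raw IDs (a stand-in for the
    'algorithmic permutations' mentioned above) so the stored value is not
    the plain hardware data itself. SHA-256 is an illustrative choice."""
    canonical = "|".join(f"{k}={device_ids[k]}" for k in sorted(device_ids))
    return hashlib.sha256(canonical.encode()).hexdigest()
```

A profile storing `{"processor_id": ..., "mac": ...}` would then be checked against whatever the device reports at login, with the fingerprint stored instead of, or alongside, the raw IDs.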

At 118, the user is given a location-specific lockdown option to add an additional layer of security based on a location of the user.

When this feature is activated, at 120, the system will obtain the GPS or WIFI location from the device (using built-in location services provided by the operating system according to methods well-known in the art) to determine the geographic location of the device that the user is using to access the system.

Upon user setup, the user will be able to add locations they are allowed to access the system from by adding to their “Allowed Locations” list. They can also remove locations from this list.

In the case that the user is using a PC that is unable to accurately pinpoint its location (e.g., as is sometimes the case with PCs running on WIFI access points that do not exist in the database of WIFI access point locations), the user may request that the system send them a URL via text to their mobile device. The system will then send this text URL, which the user will open, and the system will verify the user's location via their mobile web browser. This option can be offered during the user account setup operation if the security configuration for that particular client allows for it (to allow for a customer's flexibility).

In related versions, the phone number where the text URLs are to be sent can be entered by the user at the time of user setup, but not during the login process. This is to enhance security in the system.

In related versions, via an administrative backend, an administrator can create an override for a user for a specific time period, so that the user is allowed to bypass the location lockdown during that time period. This ability will be restricted to administrators for the user's account, and tied to a specific accessing user. This way, when the user is travelling or otherwise out of town, they can still access the system.

At 122, registration is complete and the user is allowed into the system.

Referring to FIG. 2, aspects of a method 200 that can be implemented as a software application for validating a user for accessing a secure system are illustrated. The method 200 may include, for example, at 202, accessing the software program. The software program can be executed on a variety of platforms, including, but not limited to, a desktop computer, a mobile device, a website, a server farm, a server, a virtual machine, a cloud server, and/or a cloud virtual machine. In related versions, the software application can be a plug-in application to other software or hardware as well-known in the art.

At 204, the user is presented with a login screen comprising a link to the registration page where they will enter their username and password.

If the user fails to enter proper authentication at 206, they will be locked out of the system for a specified number of minutes at 210. In related versions, the user is given a predetermined number of attempts before the user is locked out for the specified number of minutes at 208, after which the user must start over at 202. The specified number of minutes for lockout and the predetermined number of attempts are variable, and can be set by a system administrator as the administrator sees fit.

In versions where a device restriction is in place, after the user has entered their login, the system will gather unique device ID information at 212, and check to make sure that the device they are accessing the system from is in a list of Allowed Devices at 214. If it is not, the user will be directed back to 208, where the user will be shown an error message and locked out of the system for a specified number of minutes.

In related versions, the system will validate unique devices by pulling available hardware IDs such as, but not limited to, device ID, processor ID, motherboard ID, and other available IDs.

Once they have passed the Device Restriction (if enabled), the system will check the Location Restriction (if enabled for this customer). At 216, the location of the device or PC through which the user is accessing the system will be collected. At 218, if the location matches an Allowed Location that corresponds to the user, they will be allowed to proceed to the next step. In related versions, the Allowed Location can be listed on a user list.
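One way the Allowed Location match could be implemented is a great-circle distance check of the reported coordinates against each entry on the user's list. The one-kilometer match radius is an illustrative assumption, not a value given in the disclosure:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance between two latitude/longitude points, in km."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))  # 6371 km: mean Earth radius

def location_allowed(device_loc, allowed_locations, radius_km: float = 1.0) -> bool:
    """True if the device's reported (lat, lon) falls within radius_km of
    any entry on the user's Allowed Locations list."""
    return any(
        haversine_km(device_loc[0], device_loc[1], lat, lon) <= radius_km
        for lat, lon in allowed_locations
    )
```

A failed check would route the user to the error/lockout step, as described below.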

In related versions, if the user's PC is unable to provide location information, the user can select to authenticate their location via their mobile device. This will cause a URL to be texted to the user's mobile device (i.e., a number they have entered into their setup already), which can then access the user's location from their mobile browser (i.e., the user must give permission for the web app to access this information in order for the authentication to succeed).

In related versions, if the user in question has an active Location Override in effect, they will bypass this Location Restriction entirely for the specified time period. The Location Override can be determined by the user ahead of time. For example, if the user knows that he/she will be travelling and will be attempting to login while away, the user can set the Location Override prior to travelling.

If the user fails to authenticate their location by the methods above, they will be shown an error message and locked out of the system for a specified number of minutes at 208.

Once successful, the user will be presented with a series of questions that require verbal responses from the user. In related versions, the verbal responses must add up to a minimum total length of audio. For example, the verbal responses can total seven seconds of audio, or any other predetermined length of time that is determined by the system administrator. The minimum total length of audio is needed for the purposes of biometric authentication of the user's voice.
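The minimum-audio requirement above reduces to a simple check over the recorded response durations; the seven-second default mirrors the example in the text, and how the durations are measured is left open:

```python
def responses_meet_minimum(durations_seconds, minimum_total: float = 7.0) -> bool:
    """True when the user's verbal responses add up to at least the
    administrator-set minimum total length of audio."""
    return sum(durations_seconds) >= minimum_total

def seconds_still_needed(durations_seconds, minimum_total: float = 7.0) -> float:
    """How much more audio the user must supply (e.g., by repeating a
    canned phrase) before the biometric minimum is met."""
    return max(0.0, minimum_total - sum(durations_seconds))
```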

Examples of questions can include, but are not limited to, "What is your name?", "What city were you born in?", "What is your mother's maiden name?", and other similar questions. In related versions, the questions can be worded in any way that conveys the same meaning. In related versions, the questions are directed toward personal information that only the user would know.

At 220, the user is shown a picture presentation. The picture presentation can be implemented in a variety of ways. For example, in some versions, the user is shown a picture with several other randomly chosen pictures of the same size. The pictures can be sourced from a customized database of stock photos, or can be sourced online from databases of images in the public domain. In another version, the picture presentation can be a series of randomized pictures that are presented to the user one at a time, where the user is asked to identify each picture as familiar or unfamiliar.
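The first presentation variant above could be sketched as follows; the decoy pool and the count of five decoys are illustrative assumptions:

```python
import random

def build_presentation(secret_picture, decoy_pool, n_decoys: int = 5, rng=None):
    """Mix the user's picture with randomly chosen decoys of the same size
    and shuffle, so the secret picture's position is not predictable."""
    rng = rng or random.Random()
    decoys = rng.sample(decoy_pool, n_decoys)
    presentation = decoys + [secret_picture]
    rng.shuffle(presentation)
    return presentation
```

Passing a seeded `random.Random` makes the layout reproducible for testing; production use would rely on the default unseeded generator.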

At 222, the user is prompted to give a verbal response to a picture they recognize to create an intelligent voice print. In related versions, the intelligent voice print can be a predetermined length, such as a seven second wave file generated from the captured voice response. The predetermined length can be of any length sufficient for the purposes of biometric voice identification and verification of the user.

In related versions, if the intelligent voice print is not at least a predetermined length, the user will be required to repeat a canned phrase to make up the difference.

At 224, the intelligent voice print is sent off to verify the user against a voice print that was previously generated during creation of the secure login. For example, a biometric component of the intelligent voice print is matched against a biometric component of the previously generated voice print to determine whether the same person is talking. If this returns as a failure, the user is denied access and is sent to an access denied page/lockout page at 208.

If the voice is authenticated, then the textual content of the intelligent voice print is verified next at 226. In related versions, a third-party text recognition tool as well-known in the art (e.g., Annyang!) can be used to validate not only that the user is the person who spoke, but also that the spoken text matches. This adds a second layer of security that is hard to bypass, because not only must it be the same voice, but also the same textual content.

In related versions, the textual content can be a textual component of the intelligent voice print, and is compared against a previously saved text that was generated during creation of the secure login. In some versions, the textual component must match the previously saved text to within a specified tolerance that is determined by an administrator (e.g., 75%-95% accurate, or any other value) in order to be verified. In some versions, the textual component must match the previously saved text exactly.
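The tolerance-based comparison above can be sketched with a generic string-similarity ratio; Python's difflib is used here as a stand-in for whatever matching metric an implementation actually adopts, and the 0.85 default stands in for the administrator-set tolerance:

```python
from difflib import SequenceMatcher

def text_matches(spoken_text: str, stored_text: str, tolerance: float = 0.85) -> bool:
    """Compare the transcribed voice-print text against the previously
    saved text; verified when the similarity ratio meets the tolerance."""
    # Normalize case and whitespace before comparing (an illustrative
    # choice; a production system would tune normalization as well).
    a = " ".join(spoken_text.lower().split())
    b = " ".join(stored_text.lower().split())
    return SequenceMatcher(None, a, b).ratio() >= tolerance
```

Setting `tolerance=1.0` with this scheme approximates the exact-match versions described above.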

If there is a successful text match, then at 228 the user is allowed to login and given access to sensitive data, such as, but not limited to, bank account information, etc.

If there is an unsuccessful text match, then at 208 the user is denied access and sent to an access denied page. In related versions, in the event of an unsuccessful text match and/or unsuccessful biometric authentication of the user, the intelligent voice print is kept on file as evidence of a potential hacker and/or identity thief for the future purposes of potential criminal investigations and/or related proceedings. In these cases, the intelligent voice print is a sample of what the accused hacker and/or identity thief sounds like, which can be very useful as evidence for catching criminals.

In alternative versions, for the case of a blind user, the above steps can be repeated and in lieu of a visual cue, such as a picture, a sound verification can be used. The sound verification could be a song, jingle, or other audio cue as disclosed herein, and authentication of the blind user would proceed similarly to what is described herein.

In related versions, the method can be implemented in a software application, such as a mobile application or the like, and can be a plug-in application for use with other software or hardware. For example, similar to the way in which reCAPTCHA (https://www.google.com/recaptcha/intro/index.html) works, the application can be designed as a web-based plug-in that is "callable" or usable on any other webpage as a login layer. The way this could work is that a "Plugin Signup Website" can be created, where a person who wants to lock down their site using the secure login enters their information and their website's information, and receives a block of HTML and JavaScript to paste on their page (similar to: https://www.google.com/recaptcha/admin#list). This JavaScript will make the application's control functions appear as a component box, and users can then implement a call on their site to the application's servers that gets called after submission to determine if the user passed or failed the application's authentication.
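The post-submission server call described above might look like the following sketch. The endpoint URL, JSON field names, and private API-key scheme are hypothetical assumptions modeled on the reCAPTCHA-style flow, not a published API:

```python
import json
import urllib.request

VERIFY_ENDPOINT = "https://auth.example.invalid/api/verify"  # hypothetical

def build_verify_payload(api_key: str, response_token: str) -> bytes:
    """JSON body pairing the site owner's private API key with the token
    the login widget returned on the user's page."""
    return json.dumps({"secret": api_key, "token": response_token}).encode()

def verify_login(api_key: str, response_token: str) -> bool:
    """POST the payload to the application's servers and report whether
    the user passed or failed the voice/picture authentication."""
    req = urllib.request.Request(
        VERIFY_ENDPOINT,
        data=build_verify_payload(api_key, response_token),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("success", False)
```

Requiring the private key on every request is what lets the service restrict which sites' submissions are accepted, as noted below.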

Accordingly, the application can be monetized by charging site owners a fee to implement this control (by restricting which sites submissions are accepted from, or more likely, by requiring a private API key to be sent with the request). It could be marketed as a simple and easy way to add this level of voice/phrase authentication to any website that wants to implement it.

In related versions, the application can be implemented as a desktop plug-in. For example, in a similar way to the web-based control version described herein and above, a desktop plug-in implementing the application can be developed for use by desktop tools for logging into the software. As long as there is an Internet connection, the application will be able to communicate to the desktop whether the authentication passed or failed. A SaaS model for payment and processing of the authentication requests can be utilized, so it would be compatible with the web-based control option described herein.

In related versions, a mobile plug-in version for iOS and Android developers can also be implemented that can be dropped into any application. Using this approach, application developers can be charged a recurring monthly fee (e.g., 15) to use the secure login service, which would add strong security to their application.

In related versions, additional data encryption can add further levels of security to the database that stores data relating to intelligent voice prints, passwords, etc.

In related versions, placebo images (i.e., images used as random images for the purposes of a picture presentation) can be sourced from public databases of public domain images on the Internet. This is advantageous over using stored stock images because the images would never be the same twice. Alternatively, an algorithm can be implemented to prevent a hacker from being able to tell which image is the "secret" image by seeing which one is repeated most often. Such an algorithm could use code octets pursuant to the Pythagorean Theorem. For example, the code octets could be used in a unique combination to calculate C squared in the Pythagorean theorem, where that solution number would be encrypted and placed inside the desktop or device for a later security check upon user authentication and decryption. Additionally, "code signing" and "time stamping" can be utilized to protect the code and alert the company of software changes by malicious code, which will trigger an alert.
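The code-octet idea can be illustrated with a trivial sketch: two octets are treated as the legs of a right triangle and C squared is the derived check value. Which octets are combined, and how the result is encrypted and stored on the device, are left open by the text and are assumptions here:

```python
def octet_check_value(octet_a: int, octet_b: int) -> int:
    """c^2 = a^2 + b^2 per the Pythagorean theorem; the resulting number
    would be encrypted and stored on the desktop or device for a later
    security check upon user authentication and decryption."""
    return octet_a ** 2 + octet_b ** 2
```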

Third party resources that can be used to implement various aspects of the methods described herein can include Agnitio KIVOX (http://www.agnitio-corp.com/) for biometric voice authentication, CMU Sphinx for text recognition (http://cmusphinx.sourceforge.net/), Annyang! for speech recognition software (https://www.talater.com/annyang/), and RecorderJs for microphone in a browser (https://github.com/mattdiamond/Recorderjs). It is understood that these disclosed third party resources are listed by examples only, and are not meant to be exclusive. Other similar third party resources for similar functions can also be implemented without departure from the spirit of the disclosure herein.

As can be seen from the description herein, the combining of a unique verbal phrase to a picture match achieves a psychological security lock that is virtually impossible to hack.

FIG. 3 is a conceptual block diagram illustrating components of an apparatus or system 300 for accessing a secure system using a secure login. The apparatus or system 300 may include additional or more detailed components as described herein. As depicted, the apparatus or system 300 may include functional blocks that can represent functions implemented by a processor, software, or combination thereof (e.g., firmware).

As illustrated in FIG. 3, the apparatus or system 300 may comprise at least one input 302 for receiving input from a user. The component 302 may be, or may include, a means for receiving input from the user. Said means may include the processor 310 coupled to the memory 316, and to the network interface 314, or other hardware, the processor executing an algorithm based on program instructions stored in the memory. Such algorithm may include a sequence of more detailed operations, for example, receiving a picture selection and a voice input from the user as described above in relation to FIGS. 1 and 2. In some versions, the electrical component 302 can be a microphone, keyboard, mouse, camera, or other input component known in the art.

The apparatus 300 may optionally include a processor module 310 having at least one processor. The processor 310 may be in operative communication with the other components via a bus 312 or similar communication coupling. The processor 310 may effect initiation and execution of the processes or functions performed by the electrical components as described above in relation to FIGS. 1 and 2.

In related aspects, the apparatus 300 may include a network interface module 304 operable for communicating with a verification server, an authentication server, and/or a validation server over a computer network. The network interface module 304 can comprise a verification component, an authentication component, and a validation component. In further related aspects, the apparatus 300 may optionally include a module for storing information, such as, for example, a memory device/module 316. The computer readable medium or the memory module 316 may be operatively coupled to the other components of the apparatus 300 via the bus 312 or the like. The memory module 316 may be adapted to store computer readable instructions and data for effecting the processes and behavior of the modules, and subcomponents thereof, or the processor 310, or the methods 100 or 200 and one or more of the additional operations as disclosed herein. The memory module 316 may retain instructions for executing functions associated with the modules. While shown as being external to the memory 316, it is to be understood that the modules can exist within the memory 316.

In view of the exemplary systems described herein, methodologies that may be implemented in accordance with the disclosed subject matter have been described with reference to several flow diagrams. While for purposes of simplicity of explanation, the methodologies are shown and described as a series of blocks, it is to be understood and appreciated that the claimed subject matter is not limited by the order of the blocks, as some blocks may occur in different orders and/or concurrently with other blocks from what is depicted and described herein. Moreover, not all illustrated blocks may be required to implement the methodologies described herein. Additionally, it should be further appreciated that the methodologies disclosed herein are capable of being stored on an article of manufacture to facilitate transporting and transferring such methodologies to computers, or as a plug-in application to other software or hardware as well-known in the art.

Those of skill would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the aspects disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.

As used in this application, the terms “component”, “module”, “system”, and the like are intended to refer to a computer-related entity, either hardware, a combination of hardware and software, software, or software in execution. For example, a component may be, but is not limited to being, a process running on a processor, a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a server and the server can be a component. One or more components may reside within a process and/or thread of execution and a component may be localized on one computer and/or distributed between two or more computers.

Various aspects have been presented in terms of systems that may include a number of components, modules, and the like. It is to be understood and appreciated that the various systems may include additional components, modules, etc. and/or may not include all of the components, modules, etc. discussed in connection with the figures. A combination of these approaches may also be used. Certain aspects disclosed herein may be performed using computing devices including devices that utilize touch screen display technologies and/or mouse-and-keyboard type interfaces. Examples of such devices include computers (desktop and mobile), smart phones, personal digital assistants (PDAs), and other electronic devices both wired and wireless.

In addition, the various illustrative logical blocks, modules, and circuits described in connection with certain aspects disclosed herein may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, system-on-a-chip, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.

Operational aspects disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, a DVD disk, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC or may reside as discrete components in another device.

Furthermore, the one or more versions may be implemented as a method, apparatus, or article of manufacture using standard programming and/or engineering techniques to produce software, firmware, hardware, or any combination thereof to control a computer to implement the disclosed aspects. Non-transitory computer readable media can include but are not limited to magnetic storage devices (e.g., hard disk, floppy disk, magnetic strips . . . ), optical disks (e.g., compact disk (CD), digital versatile disk (DVD) . . . ), smart cards, and flash memory devices (e.g., card, stick). Those skilled in the art will recognize many modifications may be made to this configuration without departing from the scope of the disclosed aspects.

The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present disclosure. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the disclosure. Thus, the present disclosure is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

The various embodiments can be implemented in a wide variety of operating environments, which in some cases can include one or more user computers, computing devices, or processing devices which can be used to operate any of a number of applications. User or client devices can include any of a number of general purpose personal computers, such as desktop or laptop computers running a standard operating system, as well as cellular, wireless, and handheld devices running mobile software and capable of supporting a number of networking and messaging protocols. Such a system also can include a number of workstations running any of a variety of commercially-available operating systems and other known applications for purposes such as development and database management. These devices also can include other electronic devices, such as dummy terminals, thin-clients, gaming systems, and other devices capable of communicating via a network.

Various aspects also can be implemented as part of at least one service or Web service, such as may be part of a service-oriented architecture. Services such as Web services can communicate using any appropriate type of messaging, such as by using messages in extensible markup language (XML) format and exchanged using an appropriate protocol such as SOAP (derived from the “Simple Object Access Protocol”). Processes provided or executed by such services can be described in any appropriate language, such as the Web Services Description Language (WSDL). Using a language such as WSDL allows for functionality such as the automated generation of client-side code in various SOAP frameworks.

Most embodiments utilize at least one network that would be familiar to those skilled in the art for supporting communications using any of a variety of commercially-available protocols, such as TCP/IP, OSI, FTP, UPnP, NFS, and CIFS. The network can be, for example, a local area network, a wide-area network, a virtual private network, the Internet, an intranet, an extranet, a public switched telephone network, an infrared network, a wireless network, and any combination thereof.

In embodiments utilizing a Web server, the Web server can run any of a variety of server or mid-tier applications, including HTTP servers, FTP servers, CGI servers, data servers, Java servers, and business application servers. The server(s) also may be capable of executing programs or scripts in response to requests from user devices, such as by executing one or more Web applications that may be implemented as one or more scripts or programs written in any programming language, such as Java®, C, C# or C++, or any scripting language, such as Perl, Python, or TCL, as well as combinations thereof. The server(s) may also include database servers, including without limitation those commercially available from Oracle®, Microsoft®, Sybase®, and IBM®.

The environment can include a variety of data stores and other memory and storage media as discussed above. These can reside in a variety of locations, such as on a storage medium local to (and/or resident in) one or more of the computers or remote from any or all of the computers across the network. In a particular set of embodiments, the information may reside in a storage-area network (“SAN”) familiar to those skilled in the art. Similarly, any necessary files for performing the functions attributed to the computers, servers, or other network devices may be stored locally and/or remotely, as appropriate. Where a system includes computerized devices, each such device can include hardware elements that may be electrically coupled via a bus, the elements including, for example, at least one central processing unit (CPU), at least one input device (e.g., a mouse, keyboard, controller, touch screen, or keypad), and at least one output device (e.g., a display device, printer, or speaker). Such a system may also include one or more storage devices, such as disk drives, optical storage devices, and solid-state storage devices such as random access memory (“RAM”) or read-only memory (“ROM”), as well as removable media devices, memory cards, flash cards, etc.

Such devices also can include a computer-readable storage media reader, a communications device (e.g., a modem, a network card (wireless or wired), an infrared communication device, etc.), and working memory as described above. The computer-readable storage media reader can be connected with, or configured to receive, a computer-readable storage medium, representing remote, local, fixed, and/or removable storage devices as well as storage media for temporarily and/or more permanently containing, storing, transmitting, and retrieving computer-readable information. The system and various devices also typically will include a number of software applications, modules, services, or other elements located within at least one working memory device, including an operating system and application programs, such as a client application or Web browser. It should be appreciated that alternate embodiments may have numerous variations from that described above. For example, customized hardware might also be used and/or particular elements might be implemented in hardware, software (including portable software, such as applets), or both. Further, connection to other computing devices such as network input/output devices may be employed.

Storage media and computer readable media for containing code, or portions of code, can include any appropriate media known or used in the art, including storage media and communication media, such as but not limited to volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage and/or transmission of information such as computer readable instructions, data structures, program modules, or other data, including RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disk (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a system device. Based on the disclosure and teachings provided herein, a person of ordinary skill in the art will appreciate other ways and/or methods to implement the various embodiments.

The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. It will, however, be evident that various modifications, combinations, and changes may be made thereunto without departing from the broader spirit and scope of the disclosure as set forth in the claims.

The description of the subject technology is provided to enable any person skilled in the art to practice the various embodiments described herein. While the subject technology has been particularly described with reference to the various figures and embodiments, it should be understood that these are for illustration purposes only and should not be taken as limiting the scope of the subject technology.

There may be many other ways to implement the subject technology. Various functions and elements described herein may be partitioned differently from those shown without departing from the scope of the subject technology. Various modifications to these embodiments will be readily apparent to those skilled in the art, and generic principles defined herein may be applied to other embodiments. Thus, many changes and modifications may be made to the subject technology, by one having ordinary skill in the art, without departing from the scope of the subject technology.

A reference to an element in the singular is not intended to mean “one and only one” unless specifically stated, but rather “one or more.” The term “some” refers to one or more. Underlined and/or italicized headings and subheadings are used for convenience only, do not limit the subject technology, and are not referred to in connection with the interpretation of the description of the subject technology. All structural and functional equivalents to the elements of the various embodiments described throughout this disclosure that are known or later come to be known to those of ordinary skill in the art are expressly incorporated herein by reference and intended to be encompassed by the subject technology. Moreover, nothing disclosed herein is intended to be dedicated to the public regardless of whether such disclosure is explicitly recited in the above description.

Exemplary embodiments of the invention have been disclosed in an illustrative style. Accordingly, the terminology employed throughout should be read in a non-limiting manner. Although minor modifications to the teachings herein will occur to those well versed in the art, it shall be understood that what is intended to be circumscribed within the scope of the patent warranted hereon are all such embodiments that reasonably fall within the scope of the advancement to the art hereby contributed, and that that scope shall not be restricted.

Claims (20)

What is claimed is:

1. A method of validating a specific user for accessing a secure system comprising:

receiving, into a device, a picture that is prompted to the user from among a plurality of pictures, as a selected picture;

receiving an intelligent voice print in response to the selected picture, where text of the intelligent voice print represents a unique verbal response by the user to the selected picture defined by a relationship between the picture and the unique verbal response by the user;

matching the intelligent voice print associated with the selected picture to a stored authentication voice print and picture pair, where the intelligent voice print and picture pair includes both a biometric voice print along with textual information from the unique verbal response;

authenticating the user as being the specific user when the intelligent voice print is biometrically matched to within a predetermined voice tolerance to the authentication voice print;

verifying a textual component of the intelligent voice print to within a predetermined textual tolerance to the unique verbal response to the selected picture;

validating the authenticating and the verifying of the specific user; and

receiving access to the secure system based on the validating of the user as being the specific user by comparing against the stored intelligent voice print and picture pair.
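The flow recited in claim 1 can be sketched as a two-factor check, biometric plus textual, against the stored pair. The similarity measure, the numeric tolerances, and the stored-pair format below are assumptions chosen for illustration, not the patent's implementation:

```python
# Illustrative sketch of the claim-1 validation flow: authenticate the
# voice biometrically, verify the spoken text, then validate both.
from difflib import SequenceMatcher

VOICE_TOLERANCE = 0.85  # assumed biometric similarity threshold
TEXT_TOLERANCE = 0.80   # assumed textual similarity threshold

def text_similarity(a: str, b: str) -> float:
    """Crude stand-in for the textual comparison."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def validate_user(picture_id, voice_score, spoken_text, stored_pair):
    """stored_pair: (picture_id, enrolled_text) previously stored
    for the specific user; voice_score: biometric match score."""
    stored_picture, stored_text = stored_pair
    if picture_id != stored_picture:
        return False
    authenticated = voice_score >= VOICE_TOLERANCE                  # voice match
    verified = text_similarity(spoken_text, stored_text) >= TEXT_TOLERANCE
    return authenticated and verified                               # validation

# Usage: a matching picture, a strong voice score, and close text pass.
ok = validate_user("pic_7", 0.92, "my dog rex at the beach",
                   ("pic_7", "my dog Rex at the beach"))
```

Access to the secure system would then be granted only when `validate_user` succeeds, mirroring the final "receiving access" step of the claim.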

2. The method of claim 1 wherein the selecting comprises providing a picture presentation to the user, including a series of randomized pictures, and one picture which is recognized by the user, and where the selected picture is the one picture which is recognized by the user, and where the selection comprises another means of verification of identification of the user.

3. The method of claim 1 wherein the matching comprises verifying the order and/or combination of text, and the relationship between the picture and the unique verbal response by the user which comprises a picture and textual relationship used to access the secure system.

4. The method of claim 1, further comprising generating at least one device identifier based on a device component of a device used to access the secure system, wherein access is received based on a matching of the at least one device identifier to a previously stored device identifier.

5. The method of claim 1 further comprising generating a location identifier based on a predesignated location of the user and access is received based on a matching of the location identifier to a previously stored location identifier.
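The device-identifier and location-identifier checks of claims 4 and 5 might be sketched as follows; the fingerprint fields, hash choice, and proximity threshold are illustrative assumptions, not the disclosed method:

```python
# Sketch of the device-identifier (claim 4) and location-identifier
# (claim 5) checks; all field names and thresholds are assumptions.
import hashlib

def device_identifier(components: dict) -> str:
    """Derive a stable identifier from device components
    (e.g. model, serial number, OS version)."""
    canonical = "|".join(f"{k}={components[k]}" for k in sorted(components))
    return hashlib.sha256(canonical.encode()).hexdigest()

def location_matches(current, stored, max_degrees: float = 0.01) -> bool:
    """Crude proximity test of a (lat, lon) pair against the user's
    predesignated location."""
    return (abs(current[0] - stored[0]) <= max_degrees
            and abs(current[1] - stored[1]) <= max_degrees)

parts = {"model": "PhoneX", "serial": "SN123", "os": "14.2"}
fingerprint = device_identifier(parts)  # compared against the stored value
```

In use, `fingerprint` and the current location would each be matched against their previously stored counterparts before access is received, as the claims recite.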

6. The method of claim 1 wherein the matching comprises determining a word and/or multiple words in any order for textual tolerance.

receiving access to the secure system based on biometric authentication of the identification voice prints.

8. A method of validating a user comprising:

prompting a user to select a picture from among a plurality of pictures, as a selected picture and to describe the picture;

receiving a picture selection by the user;

receiving an intelligent voice print from the user based on the picture selection, where text of the intelligent voice print represents a unique verbal response by the user to the selected picture defined by a relationship between the picture and the unique verbal response by the user;

verifying a textual component of the intelligent voice print relative to textual information from the unique verbal response;

authenticating the intelligent voice print using biometric information gathered from the user;

validating the user based on the verifying and authenticating; and

granting access to the user based on the validating of the user.

9. The method of claim 8 wherein the prompting further comprises providing a picture presentation to the user, including a series of randomized pictures, and one picture which is recognized by the user, and where the selected picture is the one picture which is recognized by the user, and where the selection comprises another means of verification of identification of the user.

10. The method of claim 8 wherein the intelligent voice print matches a previous picture and intelligent voice print pair selection that was selected and stored by the user.

11. The method of claim 8 wherein validating the textual component comprises converting the intelligent voice print to a text file and comparing the text file to a previously stored text file by verifying the order and/or combination of text, and the relationship between the picture and the unique verbal response by the user includes a combined relationship resulting in text.

12. The method of claim 11 wherein the textual component is verified if the comparing is within a preset but configurable predetermined tolerance.
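The order-insensitive textual tolerance described in claims 6, 11, and 12 could be approximated with a bag-of-words overlap; the approach and the default threshold below are assumptions for illustration, not the patent's comparison method:

```python
# Sketch of a configurable, order-insensitive textual check: words may
# appear in any order, and a preset but configurable tolerance decides
# how much of the stored response must be present.

def words_match(spoken: str, stored: str, tolerance: float = 0.8) -> bool:
    """Pass when the fraction of stored words found in the spoken
    text meets the tolerance, regardless of word order."""
    spoken_words = set(spoken.lower().split())
    stored_words = set(stored.lower().split())
    if not stored_words:
        return False
    overlap = len(spoken_words & stored_words) / len(stored_words)
    return overlap >= tolerance

# "in any order": the same words in a different order still verify.
assert words_match("beach the at rex dog my", "my dog rex at the beach")
assert not words_match("a cat on a mat", "my dog rex at the beach")
```

Raising or lowering `tolerance` corresponds to the "preset but configurable predetermined tolerance" of claim 12.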

14. The method of claim 13 wherein the intelligent voice print is authenticated if the comparing is within a preset but configurable predetermined tolerance.

15. The method of claim 8 further comprising generating a picture presentation, wherein the generating comprises providing the picture presentation to the user, including a series of randomized pictures, and one picture which is recognized by the user.

16. An electronic device for executing a software application for validating a user for accessing a secure system, the electronic device comprising:

an input for receiving a picture selection by the user from among a plurality of pictures, as a selected picture;

a voice input for receiving from the user an intelligent voice print based on the picture selection, where text of the intelligent voice print represents a unique verbal response by the user to the selected picture defined by a relationship between the picture and the unique verbal response by the user;

a verification component for encrypted communication with a verification server for verifying a textual component of the intelligent voice print;

an authentication component for encrypted communication with an authentication server for authenticating the intelligent voice print, where the intelligent voice print and picture pair includes both a biometric voice print and also textual information from the unique verbal response; and

a validation component for encrypted communication with a validation server for validating the user based on both, the intelligent voice print being biometrically matched to within a predetermined voice tolerance to the authentication voice print, and also based on verifying a textual component of the intelligent voice print to within a preset but configurable textual tolerance to the unique verbal response to the selected picture.
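The encrypted client-server exchanges recited in claim 16 might, in one hypothetical realization, use HTTPS (TLS) to each of the three servers; the endpoint URLs and payload shapes below are invented for illustration:

```python
# Sketch of the claim-16 device components, each reaching its server
# over a TLS-encrypted channel; all endpoints are hypothetical.
import json
import ssl
import urllib.request

SERVERS = {  # assumed endpoints, one per component
    "verification":   "https://verify.example.com/check-text",
    "authentication": "https://auth.example.com/check-voice",
    "validation":     "https://validate.example.com/decide",
}

def build_request(component: str, payload: dict) -> urllib.request.Request:
    """Prepare a JSON POST for the named component's server."""
    return urllib.request.Request(
        SERVERS[component],
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

def send(req: urllib.request.Request) -> dict:
    """Transmit over certificate-verifying TLS and decode the reply."""
    ctx = ssl.create_default_context()
    with urllib.request.urlopen(req, context=ctx) as resp:
        return json.load(resp)
```

The verification, authentication, and validation components would each call `build_request` and `send` with their own payloads, so every exchange with a server travels over an encrypted channel.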

17. The electronic device of claim 16 wherein the input comprises a selected picture from among a picture presentation to the user, including a series of randomized pictures, and one picture which is recognized by the user, and where the selected picture is the one picture which is recognized by the user, and where the selection comprises another means of verification of identification of the user.

18. The electronic device of claim 16 wherein the electronic device comprises a location transmitter for encrypted transmission of a location of the user for use in validating a predesignated location of the user, and further comprising a device that generates at least one device identifier based on a device component of a device used to access the secure system.

19. The electronic device of claim 16 wherein the intelligent voice print is received within a preset but configurable time threshold.

20. The electronic device of claim 16 wherein the electronic device is a desktop computer, a mobile device, a website, a server farm, a server, a virtual machine, a cloud server, and/or a cloud virtual machine, and the software application is a plug-in application to other software or hardware.

U.S. application Ser. No. 15/075,516, filed Mar. 21, 2016 (claiming priority to Feb. 22, 2016): Device and method for validating a user using an intelligent voice print. Granted as U.S. Pat. No. 10,044,710 B2; status: Active, anticipated expiration 2036-10-16.
