Month: February 2018

If the gauge of an IT leader is the business value he or she delivers to the organization, Stephen Tame hasn’t done badly. During his stint at IndiGo he has exploited a generation of competitive technologies like big data, analytics, IoT and mobility to keep IndiGo soaring high well into the future.

Tame landed at IndiGo in the summer of 2014. As the Chief Advisor IT & Chief Digital Officer, he had his work cut out for him: implement IT to catalyze business advantage and chart strategic direction. But that was not nearly enough.

As the digital custodian of the largest domestic low-cost carrier, he had to embark on a multidimensional effort to reengineer its core business applications to help digital permeate through them. Net result: creating business value through digital initiatives.

And Tame was undeniably the right man for the job. He had logged miles of experience in the airline industry, including a decade-long stint with Jetstar Airways as its CIO and Head of Group Information Technology. This breadth of experience helps him weigh business objectives and apply innovative solutions to realize them.

That’s no mean feat. IndiGo needs innovative technologies to keep up with its fast pace of growth. Since its inception in 2006, IndiGo has soared with speed. With a market share of 39.6 per cent, IndiGo has emerged as India’s fastest-growing carrier and the largest domestic passenger airline. It operates in 49 destinations including 8 international destinations and has a fleet of 155 aircraft.

Consider some figures. The low-cost airline reported its ninth consecutive year of profitable operations in 2017. It has nearly doubled its domestic market share in the last five years, from 20.3% in FY12 to 39.6% in FY17. The company saw passenger growth of 31.5% during 2017 and capacity growth of 27.5%, while total revenue increased by 16.3%.

The scale of its ambitions can be gauged by the fact that IndiGo’s current throughput is close to 1,000 flights per day across 49 destinations (41 domestic and 8 international) carrying an estimated 4.4 million passengers every month. That explains why it is the largest domestic passenger airline and the fourth largest low-cost carrier in the world.

Technology is indeed IndiGo’s passport to profitability. In an interview with ETCIO.COM, Stephen Tame, Chief Advisor-IT & Chief Digital Officer, IndiGo reveals the digital route map that will help the low cost airline outfly its competitors.

Can you talk to us about the various ways in which you have shaped the digital strategy at IndiGo?

I work with business functions to see how we can execute IT capabilities, digital dexterity and the various processes for the website, mobile apps and digital marketing and operations.

There are two parts to digital transformation: external, which covers customer-facing work such as digital marketing and customer outcomes, and internal, which is about employee productivity and operational efficiency.

One of the first activities we completed at IndiGo was restructuring the IT service delivery model. We sought out and engaged business partners as part of a sourcing strategy. Outsourcing wasn't the goal; it was building the capability, quality of service delivery and scale we needed to ensure we could deliver the business outcomes. We implemented a Service Management Office (SMO) to focus on continuously improving our service delivery. Lastly, we focused internally on building the business technology services functions, to better engage with and deliver to our business areas.

Over the past two years, we have been building and delivering our digital marketing programs, based primarily on the Adobe digital marketing suite of tools. However, technology is only the enabler of any digital program; the true value lies in developing the culture, teams and capability to have a meaningful digital conversation with the customer. Once we have the tools and the team, we need to deliver this to our commercial, sales and customer services businesses. The digital team must take on the additional responsibility of actually showing the outcomes, demonstrating the value and educating the traditionally non-digital parts of the organisation on what is now possible. Lastly, and very importantly, the digital team must agree to KPIs, sign up for the accountability and prove that a meaningful digital conversation with customers is a profitable business activity.

To develop the roadmap for what our website and mobile application should look like, and what functions we should build, we started with the traditional ideation process. We ran internal innovation workshops with a broad spectrum of IndiGo business team members, then surveyed five thousand customers and asked them directly. We got lots of good ideas, though we remained concerned that we were still limiting our thinking.

So we asked our community. Through our 6Eappsters program, we engaged the best and brightest minds across India. Offering five prizes at the time, with a first prize of 100 IndiGo tickets anywhere to anywhere across India, we received amazing community inputs to drive our innovation programs. This is a strategy adopted by many businesses today, though they generally run what are commonly called hackathons, targeting developers. We were more interested in innovative ideas than in development, so 6Eappsters was open to any community member with a great idea. In digital marketing, content is king. To deliver content such as blogs, photos and videos to our social channels (Facebook, Twitter, Instagram, YouTube and so on), we need content that is neither advertising nor polished corporate material; we need culturally aligned community content.

Through our 6Eexplorer program, we looked for an explorer for every state and territory across India and then sent them on four to six journeys every year, so that they could explore their state and other IndiGo destinations and tell stories of their travel adventures in words, pictures and video that we can share with our customers. This year we have 30 6Eexplorers active across India, generating content and telling great travel stories.

For a low-cost airline, it’s imperative to keep costs to an absolute minimum. How do you leverage big data analytics to optimize operational cost and improve fuel efficiency?

IndiGo enjoys one of the lowest costs compared with other public-listed LCCs globally. This has enabled IndiGo to emerge among the most profitable low-cost carriers in the world.

IndiGo’s consistent profitability is based around a disciplined execution of the low-cost carrier model, which has proven to be the most successful airline business model globally. The core tenet of this model is to have a very efficient cost structure. Being a low-cost carrier committed to delivering low fares, it is essential that we keep a watchful eye on our costs.

An aircraft is profitable as long as it stays in the sky, and aviation fuel is the most expensive variable to be managed. IndiGo analyzes fuel efficiency on the ground and runs detailed cost analytics at every step of the aircraft's journey.

Modern aircraft, very much like modern cars, carry the best of today's technology. We receive real-time in-flight information from all our aircraft through ACARS (Aircraft Communications Addressing and Reporting System), and we can also send messages up to the aircraft. This data drives our aircraft position reporting and also supplies monitoring and reporting for engineering.

When the aircraft lands, additional technologies connect over 4G and send all the flight data monitoring and flight data analysis information.

This big data and analytics help us understand aircraft tracking, winds, altitude, temperature and fuel burn for each engine, every second, throughout the aircraft's journey. Like a Formula One car, where this information helps tune the engine to find the extra 1% needed to win the race, an airline uses it to save that 1% in fuel. Given the cost of fuel, 1% is worth tens of millions of dollars to our bottom line.
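To make the arithmetic concrete, here is a minimal sketch in Python of how per-second engine telemetry rolls up into fuel-burn totals, and why a 1% efficiency gain scales directly with the fuel bill. The field names, sample values and fuel figures are invented for illustration; nothing here reflects IndiGo's actual telemetry schema or costs.

```python
# Illustrative sketch only: field names and figures are assumptions,
# not the airline's actual telemetry schema or fuel economics.
from dataclasses import dataclass

@dataclass
class EngineSample:
    """One per-second ACARS-style reading for a single engine."""
    engine_id: int
    fuel_flow_kg_per_s: float  # instantaneous fuel burn

def total_burn_by_engine(samples: list[EngineSample]) -> dict[int, float]:
    """Sum per-second fuel flow into total kilograms burned per engine."""
    totals: dict[int, float] = {}
    for s in samples:
        totals[s.engine_id] = totals.get(s.engine_id, 0.0) + s.fuel_flow_kg_per_s
    return totals

def annual_savings_estimate(annual_fuel_cost: float, pct: float = 0.01) -> float:
    """A flat 1% efficiency gain saves 1% of the annual fuel spend."""
    return annual_fuel_cost * pct

# Two engines, three one-second samples each (made-up numbers)
samples = [
    EngineSample(1, 0.9), EngineSample(1, 1.1), EngineSample(1, 1.0),
    EngineSample(2, 1.2), EngineSample(2, 1.0), EngineSample(2, 1.1),
]
print(total_burn_by_engine(samples))           # total kg burned per engine
print(annual_savings_estimate(1_000_000_000))  # 1% of a $1B fuel bill
```

Real fuel-efficiency analytics work on far richer inputs (winds, altitude, weight, route), but the roll-up from per-second sensor readings to a per-engine and per-flight cost picture follows this same shape.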

We are also leveraging data for on-time performance. We have invested heavily in technology to capture the various stages of a flight, so that our on-time performance is recorded and reported electronically without any manual intervention. Therefore, when we report on-time performance and are recognised as the leader, ours is not just another tall claim but a fact rooted in data.

You know what they say- Efficiency is doing things right; effectiveness is doing the right things. How are you leveraging digital initiatives to bring effectiveness into your process operations?

We are using digital programs in working with aircraft, leveraging IoT and predictive analytics to collect data from the aircraft for predictive maintenance. In conjunction with the engine and aircraft manufacturers, we use all the aircraft data to report maintenance events, and now, with machine learning and some AI algorithms, we can look into this data to predict possible maintenance events. If we can identify the work needed before a fault causes a problem, we can reduce aircraft delays and improve service to our customers.
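A simple version of this idea is a drift check: compare a sensor's recent readings against its historical baseline and flag it when the deviation is statistically large. The sketch below, with invented sensor names, thresholds and data, shows the minimal form of such a check; production predictive-maintenance systems use trained models over many signals rather than a single z-score.

```python
# Hedged sketch: a minimal drift check of the kind a predictive-maintenance
# pipeline might start from. Sensor names, thresholds and data are invented.
from statistics import mean, stdev

def drift_alert(baseline: list[float], recent: list[float],
                z_threshold: float = 3.0) -> bool:
    """Flag a sensor when its recent average deviates from the historical
    baseline by more than z_threshold standard deviations."""
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return False  # no variation in baseline; nothing to compare against
    z = (mean(recent) - mu) / sigma
    return abs(z) > z_threshold

# Historical exhaust-gas-temperature readings vs. a recent hot-running window
baseline_egt = [610, 612, 608, 611, 609, 613, 610, 612]
recent_egt = [630, 633, 629, 635]
print(drift_alert(baseline_egt, recent_egt))  # flags the hot-running engine
```

The payoff described in the interview comes from acting on such flags early: an inspection scheduled at a convenient overnight stop costs far less than an unplanned delay.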

IndiGo was the first Indian airline to introduce Electronic Flight Bags (iPads and digital technologies) onto our aircraft. We are leveraging this to give digital tools to our pilots.

All this is helping us drive better operational outcomes with our airports and engineering teams. In the future we would like to extend digital processes to the buses, fuel trucks and catering trucks, creating a true digital airport for our operations teams. This is not new; a number of airlines and airports implemented these solutions some five to seven years ago. It is only recently that 4G coverage across India has made these programs feasible, and that now presents a big opportunity for a digital day of operations for Indian airports and airlines.

How are you boosting agility through mobile initiatives?

We are moving our website and mobile applications into the digital space. We have redesigned both the website and the mobile app, and we are promoting a mobile-first approach, including in customer communication. Over 50 percent of our traffic comes from mobile, so we need to make sure we provide that level of integration. The next step is to start thinking about the next evolution of the mobile app.

Most businesses take a website designed for PCs and laptops and try to squeeze it onto mobile. But mobile is a completely different user experience: smaller screens, limited keyboard capability, driven by fingers and touchscreens. It takes a complete rethink and redesign to deliver a good customer experience on mobile. The mobile device is also connected to a lot of social channels.

Personalization is part of a multi-channel strategy for improving customer experience and the customer journey across channels. How are you driving personalization?

Airlines operate in both the physical and digital spaces. Unlike pure ecommerce technology companies, we actually get to meet our customers as they board one of our aircraft; it's a personal experience. We take this personal experience into the digital space. We know our customers: where they like to travel to, whether their travel is generally leisure or business, which services they wish to buy and where they wish to sit on the aircraft. Knowing all this, we personalise our digital conversations with our customers. It's how we achieve our objective of making all this "hassle free".

The next step in making travel "hassle free" for our mobile customers is to send their boarding passes via WhatsApp or Facebook Messenger, if that is the customer's personal preference.

What are some of the key technologies you’re experimenting with?

We are looking at AI and chatbots to respond immediately to customers' enquiries and questions. The technologies and processes are maturing, and we will implement them where they make sense and where we believe they can generate a more positive customer experience and outcome.
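The simplest ancestor of such a chatbot is keyword-based intent matching, sketched below with invented intents and canned answers. Production systems use trained natural-language-understanding models and live booking data rather than keyword rules; this toy only illustrates the request-to-canned-answer shape the interview alludes to.

```python
# A deliberately tiny sketch of intent matching for a customer-service bot.
# The intents and answers are invented for illustration.
INTENTS = {
    "baggage": "Checked baggage allowance is shown on your itinerary.",
    "refund": "Refund requests can be raised from the Manage Booking page.",
    "boarding": "Your boarding pass is available in the mobile app.",
}

def reply(message: str) -> str:
    """Return the canned answer for the first intent keyword found,
    falling back to a human handoff when nothing matches."""
    text = message.lower()
    for keyword, answer in INTENTS.items():
        if keyword in text:
            return answer
    return "Let me connect you with an agent."

print(reply("How do I get a refund?"))
print(reply("hello there"))
```

The fallback branch matters: a bot that escalates gracefully to a human agent keeps the "more positive customer experience" goal intact even when the model fails.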

You could say HR is going “agile lite,” applying the general principles without adopting all the tools and protocols from the tech world. It’s a move away from a rules- and planning-based approach toward a simpler and faster model driven by feedback from participants. This new paradigm has really taken off in the area of performance management. (In a 2017 Deloitte survey, 79% of global executives rated agile performance management as a high organizational priority.) But other HR processes are starting to change too.

In many companies that’s happening gradually, almost organically, as a spillover from IT, where more than 90% of organizations already use agile practices. At the Bank of Montreal (BMO), for example, the shift began as tech employees joined cross-functional product-development teams to make the bank more customer focused. The business side has learned agile principles from IT colleagues, and IT has learned about customer needs from the business. One result is that BMO now thinks about performance management in terms of teams, not just individuals. Elsewhere the move to agile HR has been faster and more deliberate. GE is a prime example. Seen for many years as a paragon of management through control systems, it switched to FastWorks, a lean approach that cuts back on top-down financial controls and empowers teams to manage projects as needs evolve.

The changes in HR have been a long time coming. After World War II, when manufacturing dominated the industrial landscape, planning was at the heart of human resources: Companies recruited lifers, gave them rotational assignments to support their development, groomed them years in advance to take on bigger and bigger roles, and tied their raises directly to each incremental move up the ladder. The bureaucracy was the point: Organizations wanted their talent practices to be rules-based and internally consistent so that they could reliably meet five-year (and sometimes 15-year) plans. That made sense. Every other aspect of companies, from core businesses to administrative functions, took the long view in their goal setting, budgeting, and operations. HR reflected and supported what they were doing.

By the 1990s, as business became less predictable and companies needed to acquire new skills fast, that traditional approach began to bend—but it didn’t quite break. Lateral hiring from the outside—to get more flexibility—replaced a good deal of the internal development and promotions. “Broadband” compensation gave managers greater latitude to reward people for growth and achievement within roles. For the most part, though, the old model persisted. Like other functions, HR was still built around the long term. Workforce and succession planning carried on, even though changes in the economy and in the business often rendered those plans irrelevant. Annual appraisals continued, despite almost universal dissatisfaction with them.

Now we’re seeing a more sweeping transformation. Why is this the moment for it? Because rapid innovation has become a strategic imperative for most companies, not just a subset. To get it, businesses have looked to Silicon Valley and to software companies in particular, emulating their agile practices for managing projects. So top-down planning models are giving way to nimbler, user-driven methods that are better suited for adapting in the near term, such as rapid prototyping, iterative feedback, team-based decisions, and task-centered “sprints.” As BMO’s chief transformation officer, Lynn Roger, puts it, “Speed is the new business currency.”

With the business justification for the old HR systems gone and the agile playbook available to copy, people management is finally getting its long-awaited overhaul too. In this article we’ll illustrate some of the profound changes companies are making in their talent practices and describe the challenges they face in their transition to agile HR.

Where We’re Seeing the Biggest Changes

Because HR touches every aspect—and every employee—of an organization, its agile transformation may be even more extensive (and more difficult) than the changes in other functions. Companies are redesigning their talent practices in the following areas:

Performance appraisals.

When businesses adopted agile methods in their core operations, they dropped the charade of trying to plan a year or more in advance how projects would go and when they would end. So in many cases the first traditional HR practice to go was the annual performance review, along with employee goals that “cascaded” down from business and unit objectives each year. As individuals worked on shorter-term projects of various lengths, often run by different leaders and organized around teams, the notion that performance feedback would come once a year, from one boss, made little sense. They needed more of it, more often, from more people.

An early-days CEB survey suggested that people actually got less feedback and support when their employers dropped annual reviews. However, that’s because many companies put nothing in their place. Managers felt no pressing need to adopt a new feedback model and shifted their attention to other priorities. But dropping appraisals without a plan to fill the void was of course a recipe for failure.

Since learning that hard lesson, many organizations have switched to frequent performance assessments, often conducted project by project. This change has spread to a number of industries, including retail (Gap), big pharma (Pfizer), insurance (Cigna), investing (OppenheimerFunds), consumer products (P&G), and accounting (all Big Four firms). It is most famous at GE, across the firm’s range of businesses, and at IBM. Overall, the focus is on delivering more-immediate feedback throughout the year so that teams can become nimbler, “course-correct” mistakes, improve performance, and learn through iteration—all key agile principles.

In user-centered fashion, managers and employees have had a hand in shaping, testing, and refining new processes. For instance, Johnson & Johnson offered its businesses the chance to participate in an experiment: They could try out a new continual-feedback process, using a customized app with which employees, peers, and bosses could exchange comments in real time.

The new process was an attempt to move away from J&J’s event-driven “five conversations” framework (which focused on goal setting, career discussion, a midyear performance review, a year-end appraisal, and a compensation review) and toward a model of ongoing dialogue. Those who tried it were asked to share how well everything worked, what the bugs were, and so on. The experiment lasted three months. At first only 20% of the managers in the pilot actively participated. The inertia from prior years of annual appraisals was hard to overcome. But then the company used training to show managers what good feedback could look like and designated “change champions” to model the desired behaviors on their teams. By the end of the three months, 46% of managers in the pilot group had joined in, exchanging 3,000 pieces of feedback.

Regeneron Pharmaceuticals, a fast-growing biotech company, is going even further with its appraisals overhaul. Michelle Weitzman-Garcia, Regeneron’s head of workforce development, argued that the performance of the scientists working on drug development, the product supply group, the field sales force, and the corporate functions should not be measured on the same cycle or in the same way. She observed that these employee groups needed varying feedback and that they even operated on different calendars.

Why Intuit’s Transition to Agile Almost Stalled Out

The financial services division at Intuit began shifting to agile in 2009—but four years went by before that became standard operating procedure across the company.

What took so long? Leaders started with a “waterfall” approach to change management, because that’s what they knew best. It didn’t work. Spotty support from middle management, part-time commitments to the team leading the transformation, scarce administrative resources, and an extended planning cycle all put a big drag on the rollout.

Before agile could gain traction throughout the organization, the transition team needed to take an agile approach to becoming agile and managing the change. Looking back, Joumana Youssef, one of Intuit’s strategic-change leaders, identifies several critical discoveries that changed the course—and the speed—of the transformation:

Focus on early adopters. Don’t waste time trying to convert naysayers.

Form “triple-S” (small, stable, self-managed) teams, give them ownership of their work, and hold them accountable for their commitments.

Quickly train leaders at all levels in agile methods. Agile teams need to be fully supported to self-manage.

Expect that changing frontline and middle management will be hard, because people in those roles need time to acclimate to “servant leadership,” which is primarily about coaching and supporting employees rather than monitoring them.

Stay the course. Even though agile change is faster than a waterfall approach, shifting your organization’s mindset takes persistence.

So the company created four distinct appraisal processes, tailored to the various groups’ needs. The research scientists and postdocs, for example, crave metrics and are keen on assessing competencies, so they meet with managers twice a year for competency evaluations and milestones reviews. Customer-facing groups include feedback from clients and customers in their assessments. Although having to manage four separate processes adds complexity, they all reinforce the new norm of continual feedback. And Weitzman-Garcia says the benefits to the organization far outweigh the costs to HR.

Coaching.

The companies that most effectively adopt agile talent practices invest in sharpening managers’ coaching skills. Supervisors at Cigna go through “coach” training designed for busy managers: It’s broken into weekly 90-minute videos that can be viewed as people have time. The supervisors also engage in learning sessions, which, like “learning sprints” in agile project management, are brief and spread out to allow individuals to reflect and test-drive new skills on the job. Peer-to-peer feedback is incorporated in Cigna’s manager training too: Colleagues form learning cohorts to share ideas and tactics. They’re having the kinds of conversations companies want supervisors to have with their direct reports, but they feel freer to share mistakes with one another, without the fear of “evaluation” hanging over their heads.

DigitalOcean, a New York–based start-up focused on software as a service (SaaS) infrastructure, engages a full-time professional coach on-site to help all managers give better feedback to employees and, more broadly, to develop internal coaching capabilities. The idea is that once one experiences good coaching, one becomes a better coach. Not everyone is expected to become a great coach—those in the company who prefer coding to coaching can advance along a technical career track—but coaching skills are considered central to a managerial career.

P&G, too, is intent on making managers better coaches. That’s part of a larger effort to rebuild training and development for supervisors and enhance their role in the organization. By simplifying the performance review process, separating evaluation from development discussions, and eliminating talent calibration sessions (the arbitrary horse trading between supervisors that often comes with a subjective and politicized ranking model), P&G has freed up a lot of time to devote to employees’ growth. But getting supervisors to move from judging employees to coaching them in their day-to-day work has been a challenge in P&G’s tradition-rich culture. So the company has invested heavily in training supervisors on topics such as how to establish employees’ priorities and goals, how to provide feedback about contributions, and how to align employees’ career aspirations with business needs and learning and development plans. The bet is that building employees’ capabilities and relationships with supervisors will increase engagement and therefore help the company innovate and move faster. Even though the jury is still out on the companywide culture shift, P&G is already reporting improvements in these areas, at all levels of management.

Teams.

Traditional HR focused on individuals—their goals, their performance, their needs. But now that so many companies are organizing their work project by project, their management and talent systems are becoming more team focused. Groups are creating, executing, and revising their goals and tasks with scrums—at the team level, in the moment, to adapt quickly to new information as it comes in. (“Scrum” may be the best-known term in the agile lexicon. It comes from rugby, where players pack tightly together to restart play.) They are also taking it upon themselves to track their own progress, identify obstacles, assess their leadership, and generate insights about how to improve performance.

In that context, organizations must learn to contend with:

Multidirectional feedback. Peer feedback is essential to course corrections and employee development in an agile environment, because team members know better than anyone else what each person is contributing. It’s rarely a formal process, and comments are generally directed to the employee, not the supervisor. That keeps input constructive and prevents the undermining of colleagues that sometimes occurs in hypercompetitive workplaces.

But some executives believe that peer feedback should have an impact on performance evaluations. Diane Gherson, IBM’s head of HR, explains that “the relationships between managers and employees change in the context of a network [the collection of projects across which employees work].” Because an agile environment makes it practically impossible to “monitor” performance in the old sense, managers at IBM solicit input from others to help them identify and address issues early on. Unless it’s sensitive, that input is shared in the team’s daily stand-up meetings and captured in an app. Employees may choose whether to include managers and others in their comments to peers. The risk of cutthroat behavior is mitigated by the fact that peer comments to the supervisor also go to the team. Anyone trying to undercut colleagues will be exposed.

In agile organizations, “upward” feedback from employees to team leaders and supervisors is highly valued too. The Mitre Corporation’s not-for-profit research centers have taken steps to encourage it, but they’re finding that this requires concentrated effort. They started with periodic confidential employee surveys and focus groups to discover which issues people wanted to discuss with their managers. HR then distilled that data for supervisors to inform their conversations with direct reports. However, employees were initially hesitant to provide upward feedback—even though it was anonymous and was used for development purposes only—because they weren’t accustomed to voicing their thoughts about what management was doing.

Mitre also learned that the most critical factor in getting subordinates to be candid was having managers explicitly say that they wanted and appreciated comments. Otherwise people might worry, reasonably, that their leaders weren’t really open to feedback and ready to apply it. As with any employee survey, soliciting upward feedback and not acting on it has a diminishing effect on participation; it erodes the hard-earned trust between employees and their managers. When Mitre’s new performance-management and feedback process began, the CEO acknowledged that the research centers would need to iterate and make improvements. A revised system for upward feedback will roll out this year.

Because feedback flows in all directions on teams, many companies use technology to manage the sheer volume of it. Apps allow supervisors, coworkers, and clients to give one another immediate feedback from wherever they are. Crucially, supervisors can download all the comments later on, when it’s time to do evaluations. In some apps, employees and supervisors can score progress on goals; at least one helps managers analyze conversations on project management platforms like Slack to provide feedback on collaboration. Cisco uses proprietary technology to collect weekly raw data, or “breadcrumbs,” from employees about their peers’ performance. Such tools enable managers to see fluctuations in individual performance over time, even within teams. The apps don’t provide an official record of performance, of course, and employees may want to discuss problems face-to-face to avoid having them recorded in a file that can be downloaded. We know that companies recognize and reward improvement as well as actual performance, however, so hiding problems may not always pay off for employees.
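One way such tools surface "fluctuations in individual performance over time" is by smoothing weekly feedback scores so short-term dips stand out from noise. The sketch below is a toy illustration with invented scores; the actual apps mentioned above are proprietary, and their methods are not described in the article.

```python
# Illustrative only: smoothing weekly peer-feedback scores with a rolling
# mean so a mid-quarter dip becomes visible. Data and window are invented.
def rolling_mean(scores: list[float], window: int = 3) -> list[float]:
    """Average each score with its preceding (window - 1) scores."""
    return [
        sum(scores[i - window + 1 : i + 1]) / window
        for i in range(window - 1, len(scores))
    ]

weekly_scores = [4.0, 4.2, 4.1, 3.2, 3.0, 3.1, 4.0]  # a mid-quarter dip
print(rolling_mean(weekly_scores))
```

As the article notes, these signals inform conversations rather than form an official record; the smoothed trend is a prompt for a manager to ask what changed, not a grade.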

Frontline decision rights. The fundamental shift toward teams has also affected decision rights: Organizations are pushing them down to the front lines, equipping and empowering employees to operate more independently. But that’s a huge behavioral change, and people need support to pull it off. Let’s return to the Bank of Montreal example to illustrate how it can work. When BMO introduced agile teams to design some new customer services, senior leaders weren’t quite ready to give up control, and the people under them were not used to taking it. So the bank embedded agile coaches in business teams. They began by putting everyone, including high-level executives, through “retrospectives”—regular reflection and feedback sessions held after each iteration. These are the agile version of after-action reviews; their purpose is to keep improving processes. Because the retrospectives quickly identified concrete successes, failures, and root causes, senior leaders at BMO immediately recognized their value, which helped them get on board with agile generally and loosen their grip on decision making.

Complex team dynamics. Finally, since the supervisor’s role has moved away from just managing individuals and toward the much more complicated task of promoting productive, healthy team dynamics, people often need help with that, too. Cisco’s special Team Intelligence unit provides that kind of support. It’s charged with identifying the company’s best-performing teams, analyzing how they operate, and helping other teams learn how to become more like them. It uses an enterprise-wide platform called Team Space, which tracks data on team projects, needs, and achievements to both measure and improve what teams are doing within units and across the company.

Compensation.

Pay is changing as well. A simple adaptation to agile work, seen in retail companies such as Macy’s, is to use spot bonuses to recognize contributions when they happen rather than rely solely on end-of-year salary increases. Research and practice have shown that compensation works best as a motivator when it comes as soon as possible after the desired behavior. Instant rewards reinforce instant feedback in a powerful way. Annual merit-based raises are less effective, because too much time goes by.

Patagonia has actually eliminated annual raises for its knowledge workers. Instead the company adjusts wages for each job much more frequently, according to research on where market rates are going. Increases can also be allocated when employees take on more-difficult projects or go above and beyond in other ways. The company retains a budget for the top 1% of individual contributors, and supervisors can make a case for any contribution that merits that designation, including contributions to teams.


Compensation is also being used to reinforce agile values such as learning and knowledge sharing. In the start-up world, for instance, the online clothing-rental company Rent the Runway dropped separate bonuses, rolling the money into base pay. CEO Jennifer Hyman reports that the bonus program was getting in the way of honest peer feedback. Employees weren’t sharing constructive criticism, knowing it could have negative financial consequences for their colleagues. The new system prevents that problem by “untangling the two,” Hyman says.

DigitalOcean redesigned its rewards to promote equitable treatment of employees and a culture of collaboration. Salary adjustments now happen twice a year to respond to changes in the outside labor market and in jobs and performance. More important, DigitalOcean has closed gaps in pay for equivalent work. It’s deliberately heading off internal rivalry, painfully aware of the problems in hypercompetitive cultures (think Microsoft and Amazon). To personalize compensation, the firm maps where people are having impact in their roles and where they need to grow and develop. The data on individuals’ impact on the business is a key factor in discussions about pay. Negotiating to raise your own salary is fiercely discouraged. And only the top 1% of achievement is rewarded financially; otherwise, there is no merit-pay process. All employees are eligible for bonuses, which are based on company performance rather than individual contributions. To further support collaboration, DigitalOcean is diversifying its portfolio of rewards to include nonfinancial, meaningful gifts, such as a Kindle loaded with the CEO’s “best books” picks.

How does DigitalOcean motivate people to perform their best without inflated financial rewards? Matt Hoffman, its vice president of people, says it focuses on creating a culture that inspires purpose and creativity. So far that seems to be working. The latest engagement survey, via Culture Amp, ranks DigitalOcean 17 points above the industry benchmark in satisfaction with compensation.

Recruiting.

With the improvements in the economy since the Great Recession, recruiting and hiring have become more urgent—and more agile. To scale up quickly in 2015, GE’s new digital division pioneered some interesting recruiting experiments. For instance, a cross-functional team works together on all hiring requisitions. A “head count manager” represents the interests of internal stakeholders who want their positions filled quickly and appropriately. Hiring managers rotate on and off the team, depending on whether they’re currently hiring, and a scrum master oversees the process.

To keep things moving, the team focuses on vacancies that have cleared all the hurdles—no reqs get started if debate is still ongoing about the desired attributes of candidates. Openings are ranked, and the team concentrates on the top-priority hires until they are completed. It works on several hires at once so that members can share information about candidates who may fit better in other roles. The team keeps track of its cycle time for filling positions and monitors all open requisitions on a kanban board to identify bottlenecks and blocked processes. IBM now takes a similar approach to recruitment.

Companies are also relying more heavily on technology to find and track candidates who are well suited to an agile work environment. GE, IBM, and Cisco are working with the vendor Ascendify to create software that does just this. The IT recruiting company HackerRank offers an online tool for the same purpose.

Learning and development.

Like hiring, L&D has had to change to bring new skills into organizations more quickly. Most companies already have a suite of online learning modules that employees can access on demand. Although helpful for those who have clearly defined needs, this is a bit like giving a student the key to a library and telling her to figure out what she must know and then learn it. Newer approaches use data analysis to identify the skills required for particular jobs and for advancement and then suggest to individual employees what kinds of training and future jobs make sense for them, given their experience and interests.
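A minimal sketch of that data-driven approach: compute the gap between an employee's current skills and a target role, then rank available trainings by how many gap skills each covers. The roles, skills, and course names below are invented for illustration and stand in for whatever taxonomy a real system would mine from job and training data.

```python
# Hypothetical skill requirements per role and skills taught per course.
ROLE_SKILLS = {
    "data analyst": {"sql", "statistics", "visualization"},
    "ml engineer": {"python", "statistics", "ml", "deployment"},
}
COURSE_SKILLS = {
    "Intro to SQL": {"sql"},
    "Applied Statistics": {"statistics"},
    "ML in Production": {"ml", "deployment"},
}

def skill_gap(employee_skills: set[str], target_role: str) -> set[str]:
    """Skills the target role requires that the employee does not yet have."""
    return ROLE_SKILLS[target_role] - employee_skills

def suggest_courses(employee_skills: set[str], target_role: str) -> list[str]:
    """Courses ranked by how many gap skills each covers (ties broken alphabetically)."""
    gap = skill_gap(employee_skills, target_role)
    scored = [(len(taught & gap), name) for name, taught in COURSE_SKILLS.items()]
    return [name for score, name in sorted(scored, key=lambda t: (-t[0], t[1])) if score > 0]
```

A production system would weight this by career trajectory and interests, as the IBM example below the text describes, but the core operation is this kind of set difference plus ranking.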

IBM uses artificial intelligence to generate such advice, starting with employees’ profiles, which include prior and current roles, expected career trajectory, and training programs completed. The company has also created special training for agile environments—using, for example, animated simulations built around a series of “personas” to illustrate useful behaviors, such as offering constructive criticism.

What HR Can Learn from Tech

The agile pioneers in the tech world are years ahead of everyone else in adopting the methodology at scale. So who better to provide guidance as managers and HR leaders grapple with how to apply agile talent practices throughout their organizations? In a recent survey, thousands of software developers across many countries and industries identified their biggest obstacles in scaling and the ways they got past them.

Traditionally, L&D has included succession planning—the epitome of top-down, long-range thinking, whereby individuals are picked years in advance to take on the most crucial leadership roles, usually in the hope that they will develop certain capabilities on schedule. The world often fails to cooperate with those plans, though. Companies routinely find that by the time senior leadership positions open up, their needs have changed. The most common solution is to ignore the plan and start a search from scratch. But organizations often continue doing long-term succession planning anyway. (About half of large companies have a plan to develop successors for the top job.) Pepsi is one company taking a simple step away from this model by shortening the time frame. It provides brief quarterly updates on the development of possible successors—in contrast to the usual annual updates—and delays appointments so that they happen closer to when successors are likely to step into their roles.

Ongoing Challenges

To be sure, not every organization or group is in hot pursuit of rapid innovation. Some jobs must remain largely rules based. (Consider the work that accountants, nuclear control-room operators, and surgeons do.) In such cases agile talent practices may not make sense.

And even when they’re appropriate, they may meet resistance—especially within HR. A lot of processes have to change for an organization to move away from a planning-based, “waterfall” model (which is linear rather than flexible and adaptive), and some of them are hardwired into information systems, job titles, and so forth. The move toward cloud-based IT, which is happening independently, has made it easier to adopt app-based tools. But people issues remain a sticking point. Many HR tasks, such as traditional approaches to recruitment, onboarding, and program coordination, will become obsolete, as will expertise in those areas.

Meanwhile, new tasks are being created. Helping supervisors replace judging with coaching is a big challenge not just in terms of skills but also because it undercuts their status and formal authority. Shifting the focus of management from individuals to teams may be even more difficult, because team dynamics can be a black box to those who are still struggling to understand how to coach individuals. The big question is whether companies can help managers take all this on and see the value in it.

The HR function will also require reskilling. It will need more expertise in IT support—especially given all the performance data generated by the new apps—and deeper knowledge about teams and hands-on supervision. HR has not had to change in recent decades nearly as much as have the line operations it supports. But now the pressure is on, and it’s coming from the operating level, which makes it much harder to cling to old talent practices.

There is little room for doubt that Russia interfered in the 2016 election. The Justice Department on Friday handed down indictments against 13 Russian nationals and three Russian companies for meddling in United States political and election processes, the latest item in a litany of evidence that Russia, well, did it.

Even scarier, there is every indication that Russia is likely to try to interfere in the American political process again — and many of the technologies, trends, and processes it exploited in the past are largely unchanged. (Catch that New York Times story on the Twitter bot factories?)

“I’ll tell you right up front, it is going to happen again,” Greg Touhill, a retired Air Force general officer and one of the nation’s premier cybersecurity experts, told me. Touhill is currently president of Cyxtera Federal Group, a secure infrastructure company. Before that, he served in a wide range of government roles, including as the first United States chief information security officer in 2016.

I spoke with Touhill about what the United States can do to stop Russia from interfering in US politics and elections in 2018 and beyond. While the federal government certainly has a major role to play — in deterring future interference, in supporting state and local election officials, and in boosting national security efforts — Touhill noted that the technology companies Russians use as a conduit in their disinformation campaign have a responsibility as well.

So do everyday Americans, in using good judgment when they’re reading news sources: “If it sounds phony, it probably is,” he said.

This interview has been edited and condensed for clarity.

Emily Stewart

We keep getting more details about Russian meddling in the 2016 election, including Friday’s indictments, and we’re also seeing warnings that Russians are likely to try something again in 2018. What can and should the federal government and other entities be doing so that we don’t see this happen again?

Greg Touhill

I’ll tell you right up front, it is going to happen again. It’s happened before, and frankly, it’s happened throughout all of time. A different way to phrase it is how do we prepare ourselves to deal with this when it happens again? And how do we mitigate it and the like?

Information operations, influence operations, or whatever you want to call it — and different nations call it different things — people have recognized, as Francis Bacon used to say, knowledge is power. They’re constantly trying to seek the ability to influence and get knowledge and get an information advantage. From my perch, I think that we want to deter further action, we want to mitigate it when it does happen, and we want to take action that’s effective and proportionate when we do detect that somebody is breaking international norms.

Emily Stewart

How do you balance deterrence of future action against retaliation or punishment of past action? How would you approach it?

Greg Touhill

If you take a look at all the different instruments of power that are available to the United States, we have the military option, which as a retired officer I think should be the last resort, but certainly it should be on the table for consideration, particularly when it comes to deterrence. We also have the political, the economic, and the diplomatic means as well.

First things first is you have to — when you see somebody who is breaking norms and is engaged in things that we don’t believe as an international community are the right things to do — you need to confront that, and you need to present the evidence that says, “Hey, here is where you are breaking the norms.”

We have been working, from the United States government, on a forward-thinking leadership approach to cyber norms. That should be a priority in the international community, and the United States should take a continuous leadership role in making sure that we have a clear understanding and articulation of acceptable behavior in the cyber domain. Affirmation of the cyber norms that have already been proposed needs to be a priority for our diplomatic efforts.

Secondly, when we see folks that are deviating from those norms, there needs to be some accountability, and that’s where we have the ability under our current legal framework to issue economic sanctions, diplomatic sanctions, and, in [Friday’s] case, legal indictments, where we are trying to hold individuals and states accountable for violating law and, as I mentioned, norms of acceptable behavior.

Emily Stewart

What agencies or entities within the government need to take the lead here?

Greg Touhill

Frankly, this is a whole of government issue. And as you take a look at all those instruments of national power, it’s distributed across departments and agencies. That’s a reason why in 1947 we established the National Security Council to help coordinate a lot of the activities dealing with national security.

I would submit that our national security and our national prosperity are intrinsically linked to cybersecurity and the integrity of information technology and the information that’s contained within it. Try to name a business or an institution, or a societal institution itself, that doesn’t rely on IT right now; it’s very difficult. As we take a look at the roles across the federal government — the Department of State, the Department of Treasury, the Department of Homeland Security, the Department of Defense, the Department of Commerce, the Department of Justice — virtually every single major department and agency has a stake in those elements of national power that we could use and leverage to deal with issues of deterrence and proper response to cyberattacks.

The National Security Council, working under the National Command Authority, that’s where I’m looking for leadership to coordinate all instruments of national power.

Emily Stewart

What about the president? On Friday, the indictments come down, and he says, “No collusion!”

Greg Touhill

I don’t necessarily see the discussion of collusion being the same as to acknowledge that we have an issue with Russian-based actors engaged in influence operations against the United States. I took the collusion issue as a separate domestic issue as opposed to the actual influence operations.

I believe that the evidence we’ve seen thus far points toward Russian-based actors engaged in targeted influence operations directed against the people of the United States with what appears to be an ultimate goal to undermine democratic institutions in the United States.

Emily Stewart

Well, but Trump doesn’t seem hyper concerned about Russia; he seems to be downplaying it.

Greg Touhill

I don’t know President Trump, nor do I know his leadership style, so I really can’t comment on that.

It’s very possible, and I wouldn’t rule it out, that he has directed the National Security Council to provide him different options, and as you take a look at activities at [a] nation-state level, many of those deliberations are going to be held in very classified settings. At this point, I really can’t comment because I don’t know what he’s directing in the background. Nor would I expect, if it were President Obama or President Bush or President Clinton or any of his predecessors — this is really an important topic, and I’m confident that the National Security Council is in fact looking at all different options that would be on the table and advising the president as such.

Emily Stewart

Beyond the government and the president, what do companies like Facebook and Twitter, which seem to be a major part of what happened in 2016, need to be doing?

Greg Touhill

If you look at it through the lens of cybersecurity, I think there are three major lenses: people, process, and technology. You’re taking a look at all sorts of different media platforms, which could include Twitter, Facebook, and the like; under the social media umbrella, these are powerful platforms. You want to make sure you get it right.

You want to make sure that your people are properly trained to maintain the integrity of the product and information that you’re putting out. You want to make sure that you have the proper processes in place to properly vet input so that you, in fact, are not putting out, for lack of a better term, “fake news.” It’s almost like yelling, “Fire!” in a movie theater: You want to make sure that you are, in fact, accurate and that your product is trusted. And you want to put in the right technologies to make sure that you have positive control over the information that you’re sharing.

There are plenty of tools that are currently developed and being fielded right now that can help on the technology front, and certainly training and processes are part of good order and discipline in any business these days. From a technology standpoint, you should not let just anybody have access to your information or equipment or systems and the like.

Having positive control over the platforms themselves is critically important. Technologies such as software-defined perimeters, which are identity-centric and really go down and validate authorities and identities prior to connecting (authorization first, connection second), are critically important. You see more and more companies switching to things like software-defined perimeters to make sure they have positive control over their tech and protect the information inside it, regardless of what industry they’re in: finance, social media, etc.

I am heartened, though, by the rhetoric of some of the companies, where they’re coming out and saying, “Hey, we’re putting things in so people, if they see something, they can say something, question whether or not this is fake news.” That’s a step in the right direction, but I want to see more.

Emily Stewart

I’m interested in this question of whether social media companies need to know their customers. Banks are subject to know-your-customer and anti-money laundering laws; can’t technology companies be too? At the same time, with those sorts of regulations, you tend to hear protests on the First Amendment front — namely, shouldn’t people be able to say whatever they want, presumably, on Twitter, even if it is a bot?

Greg Touhill

That gets back to yelling, “Fire!” in a movie theater. There was a great debate about 100 years ago as to First Amendment rights. Do you have the right to yell, “Fire!” in the movie theater if public safety is at risk? If we take a look at different companies that are out there, do they in fact have the code of ethics to make sure the information presented is in fact proper?

Google, what’s their theme? Do no harm, right? If Google is serving up info that may in fact be harmful, is that contrary to their own ethics? It’s a heavy issue, and I’m not necessarily a philosopher, but professor Touhill would tell you that you’ve got a great capability, and technology doesn’t always solve every problem. Leadership is needed at all levels, including in the technology areas to try to combat this problem.

And as I also tell my mother, you need to not draw conclusions from a single news source; you need to go survey the whole landscape. I believe that freedom of the press here in the United States is one of our greatest strengths, and I expect the press to do their bit too, to make sure that when they’re seeing fake news they’re pulling it out so that we can, in fact, all work together as a team, as a people, to make sure that the general population gets the right news, the truth. That’s what we’re all looking for. It’s more than just technology.

Emily Stewart

Along those lines, beyond the government, tech companies, the press, what about me, sitting at home on my computer? Is there some role citizens need to play in this in being smarter in the way that they consume news and information?

Greg Touhill

There are some very straightforward things that every citizen can and should be doing.

One is don’t believe everything you see online. Do your homework, go check multiple sources, make sure that you are staying away from suspicious websites, go to news sources that are trusted and maintain that same level of integrity as you would hope that you would be promoting yourself. You want to get your news from folks who will double-check and triple-check their sources, that are unimpeachable, that recognize their responsibility. And if it’s coming from a news source that you don’t know, then it’s probably not a trusted source. That’s the first thing.

Second thing, follow the advice I gave my mother — get your news from multiple sources. There’s more than one network on TV, and there’s more than one newspaper online. The great news organizations have at their core the same story, but they give you different analyses, different perspectives. If you want to be better educated into the news, you’re better served by understanding those different perspectives. Make sure that you’re doing your homework and not necessarily going to just one news source.

Third, if it sounds phony, it probably is. Dig deeper when you see things that seem outrageous. You may find that things that are particularly outrageous, if they’re not coming from a trusted news source, probably are made up.

Emily Stewart

In wrapping up, going forward, just looking at the next six months, if you could pick out three things that the federal government could do to safeguard election integrity, what do you think they should do?

Greg Touhill

Number one, work with state governments — state, local, county, tribal, territorial governments — because all elections are managed locally. The federal government does not go out and do voter registration; the federal government does not do the collection of votes, and the federal government does not do the tabulation of votes. That’s all done locally and up to the state level.

It’s really important for the federal government to work with the states and the counties to make sure they are hardened. I mentioned those three processes — voter registration, the actual casting of the ballot, and the actual tabulation, counting the votes — three individual processes that are all critical.

That’s all done at the state level, [but] the federal government can assist the states on that. They can assist with best practices. Having been director of the NCCIC for a while — that’s the National Cybersecurity and Communications Integration Center, which has the US-CERT and the ICS-CERT, the Industrial Control Systems Cyber Emergency Response Team — we went out and reached out to the secretaries of state in different states and offered assistance.

There’s a lot of discussion right now as to how the states want to use the capabilities and best practices and the like, but I think that’s something that still needs to be at the top of the agenda at the state level as well as within the Department of Homeland Security to help.

Two, from an influence operations standpoint, we have to do counter influence operations, and I think we’ve already started a lot of that. We need to make sure that the American people understand that there are influence operations that are, in fact, being conducted against us, and the media has been really good as of late, for example, highlighting the fact that we had the major intelligence leaders testifying before Congress this past week, raising that alert.

The next step is for the federal government to actually have a plan on how to educate and inform citizens as to, “What do I need to do in an environment where influence operations are ongoing?” That’s going to be very difficult for the United States government to do given the fact that we cherish freedom of the press and our First Amendment, but we do need to make sure that we have an educated and informed populace.

The third thing that the federal government should be doing, in my opinion, is be[ing] very clear from a deterrence standpoint what the consequences would be for any entity that is trying to interfere with our free and open democratic processes. There should be accountability. There should be activity leveraging diplomatic and other instruments of national power to deter any entity from attacking our most cherished democratic institutions.

“At UC Davis, we acknowledge and honor exemplary faculty, staff, students and community members who help to cultivate an atmosphere of inclusiveness. They speak to the heart of what makes our campus and region a great place to work, teach, learn, play and live.”

PRINCIPLES OF COMMUNITY WEEK

The UC Davis Principles of Community, which are among the underpinnings of the Chancellor’s Achievement Awards for Diversity and Community, get some recognition of their own next week — during our annual Principles of Community Week, Feb. 26-March 3.

Networking luncheons will be held on the Davis and Sacramento campuses, while the Davis campus also will be the venue for Multicultural Awareness Night, the Latino Film Festival and Dialogue on Allyship.

This is part of what Gary S. May had to say Feb. 6 in presenting the 2018 Chancellor’s Achievement Awards for Diversity and Community to eight individuals — in the categories of Academic Senate, Academic Federation, undergraduate, graduate student, postdoctoral, staff, special recognition and community — and three departments.

The awards ceremony took place in the early evening at the Chancellor’s Residence. “This event is a perfect way to cap my workday,” May said. “The spirit of these awards speaks to me deeply on a personal and professional level” — as a college student who remembers well the feeling of being the only person of color in the lecture halls and laboratories, and as an engineering professor and dean working hard to change that, especially for students from ethnic groups that are underrepresented in the STEM fields.

“UC Davis’ strong commitment to diversity is one of the key reasons I wanted to come here,” May said. “I wanted to be part of a community that deliberately recruits, retains, embraces and celebrates people with backgrounds, gender identities and skill sets that are underrepresented in higher education. I wanted to be part of a community that honors the promoters of socio-economic mobility who we are celebrating today.”

Here are the 2018 award recipients, with comments about them condensed from nomination forms and remarks from the awards ceremony, delivered by Rahim Reed, associate executive vice chancellor, Office of Campus Community Relations.

Individual award recipients

Academic Senate: Natalia Deeb-Sossa

Associate professor of Chicano/a studies, recognized for her socially and politically engaged scholarship, community outreach and contributions to marginalized communities. For example, she founded the Knights Landing Bridge Program, now known as the UC Davis Chicana/o Bridge Program, to reflect that UC Davis students provide “bridge” tutoring not only in Knights Landing but in other rural communities as well. “As a professor, she is highly regarded by her students, who often highlight her willingness to support them beyond traditional teaching duties.”

Academic Federation: Jorge Garcia

Clinical professor of internal medicine; and interim associate director, Office of Student and Resident Diversity. “His efforts have helped to ensure that UC Davis welcomes diversity with open arms. … Although he is an accomplished physician, he has never forgotten the awkwardness and isolation he felt in embarking on a career in medicine, and then in academic medicine. This is why Dr. Garcia relishes his position as a role model and inspirational coach for underrepresented students in medicine.”

Undergraduate: Samantha Chiang

She is a fourth-year English major and Asian American studies minor, and a former ASUCD senator (2016-17). “Her passion for assisting marginalized and underrepresented communities is a reflection of her deep desire to create a more equitable and inclusive campus environment.” She is the founding director of the UC Davis Mental Health Initiative, which runs the annual mental health conference and awareness month, and has also worked in the areas of disability rights and cultural competency training. She worked with Student Health and Counseling Services to create translated insurance documents in Mandarin and Spanish.

Graduate Student: Hung Doan

This plant pathology student believes that service is at the heart of scholarship. He mentors undergraduates from underrepresented groups, and he works to alleviate food insecurity within the UC Davis student community (especially among underrepresented students) and in the surrounding community. Since 2011, he has worked as coordinator and head cook for the student-run soup kitchen HELP, which stands for Help and Education Leading to the Prevention of Poverty.

Postdoctoral: Lauren Libero

She studies at the MIND Institute, where she is the volunteer co-leader of a social skills program for autistic adults and family members, and a support group leader. One of those groups, for family members of people on the autism spectrum, was on the verge of shutting down, due to a staff retirement, until Libero advocated to keep it going with her as the lead staff member. She started a support group for women on the autism spectrum, and mentors children and young adults in theater and improvisation to enhance their communication skills.

Staff Award: Lina Mendez

Associate director, Center for Chicanx and Latinx Academic Student Success. “Through her research as well as her lived experiences and journey in support of the Chicanx and Latinx student communities, she has focused on channeling their potential in the pursuit of educational excellence, while also working to shape the institutions that serve them” — including the Center for Educational Effectiveness (as a graduate student) and the UC Davis Health Center for Reducing Health Disparities (as a post-doc).

Special Recognition: Barbara Ashby

The manager of WorkLife and Wellness has devoted her career to program and policy development in support of women, children and families. She secured grants and other funding to assist student parents with child care expenses, and established three child care facilities serving more than 300 children. She founded the Breastfeeding Support Program, and she also was instrumental in workplace flexibility policy. More recently she collaborated with the Women’s Resources and Research Center to establish the Caregiver Support Group and Education Program.

Community Achievement: Cassandra Jennings

President and chief executive officer, Greater Sacramento Urban League, who formerly worked in Sacramento city government and at the Sacramento Housing and Redevelopment Agency, including six years as deputy executive director. In her three years in the Urban League’s top leadership post, she has assisted UC Davis’ outreach efforts in underserved communities in Sacramento through Sacramento Area Youth Speaks, or SAYS, a UC Davis-run program that is now co-located at Urban League headquarters in Del Paso Heights.

Honorary awards

The campus introduced this category last year to recognize departments and divisions for taking the initiative to include training in diversity and inclusion as part of organizational and staff development.

“These efforts are in support of the UC Davis Diversity and Inclusion Initiative, and it is our hope that the campus community will be inspired by these organizations’ proactive measures in operationalizing our Principles of Community, and in striving towards a more diverse and inclusive UC Davis,” Reed said.

UC Davis Health Information Technology Division — It has worked with UC Davis Health’s Office for Equity, Diversity and Inclusion the last two years to offer diversity and inclusion training to 70 IT supervisors. Management training includes “The Impact of Unconscious Bias on Workplace Teams” and “Understanding Generational Differences” to help improve communication, teamwork and employee engagement. Individual teams are encouraged to arrange their own trainings, say, with speakers from the Harassment and Discrimination Assistance and Prevention Program, or HDAPP. The Office for Equity, Diversity and Inclusion will host four Diversity and Inclusion Dialogues for the IT division to assist in building a culture of lifelong learning in diversity and inclusion.

Editor’s note about the photo caption and award summary above: As originally published, we gave the incorrect title of the unit being honored. It is the UC Davis Health Information Technology Division, as corrected above. We apologize for the error.

School of Medicine Postbaccalaureate Program — This is a one-year program designed to help educationally and/or socio-economically disadvantaged students become more competitive applicants to medical school. The program partners with the Office of Campus Community Relations for sessions on unpacking oppression, microaggressions and stereotype threat, and weaves these topics into conversations about understanding diversity, and to further develop students’ critical thinking skills. The Postbaccalaureate Program participates in the Campus Community Book Project to further inform students’ understanding of equity issues and how they translate to the health care fields. Elio A. Gutierrez, program coordinator, accepted the award, which also recognized Jose A. Morfin of the Department of Nephrology.

Student Housing and Dining Services — All leads and managers undergo professional development training on “Understanding Diversity,” “Anti-Bullying,” “Cross-Cultural Communication” and “Conflict Management,” all meant to encourage staff to live and practice the Principles of Community at work, among colleagues, and with the campus community members they serve. Student Housing and Dining Services also ensures that their student staff, especially those who work in advising capacities, are exposed to the Campus Community Book Project, integrating the chosen book as part of student staff training.

Executive Summary

Face recognition is poised to become one of the most pervasive surveillance technologies, and law enforcement’s use of it is increasing rapidly. Today, law enforcement officers can use mobile devices to capture face recognition-ready photographs of people they stop on the street; surveillance cameras boast real-time face scanning and identification capabilities; and federal, state, and local law enforcement agencies have access to hundreds of millions of images of faces of law-abiding Americans. On the horizon, law enforcement would like to use face recognition with body-worn cameras, to identify people in the dark, to match a person to a police sketch, or even to construct an image of a person’s face from a small sample of their DNA.

However, the adoption of face recognition technologies like these is occurring without meaningful oversight, without proper accuracy testing of the systems as they are actually used in the field, and without the enactment of legal protections to prevent internal and external misuse. This has led to the development of unproven, inaccurate systems that will impinge on constitutional rights and disproportionately impact people of color.

Without restrictive limits in place, it could be relatively easy for the government and private companies to build databases of images of the vast majority of people living in the United States and use those databases to identify and track people in real time as they move from place to place throughout their daily lives. As researchers at Georgetown found in 2016, one out of two Americans is already in a face recognition database accessible to law enforcement.1

This white paper takes a broad look at the problems with law enforcement use of face recognition technology in the United States. Part 1 provides an overview of the key issues with face recognition, including accuracy, security, and impact on privacy and civil rights. Part 2 focuses on FBI’s face recognition programs, because FBI not only manages the repository for most of the criminal data used by federal, state, local, and tribal law enforcement agencies across the United States, but also provides direct face recognition services to many of these agencies, and its systems exemplify the wider problems with face recognition. After considering these current issues, Part 3 looks ahead to potential future face recognition capabilities and concerns. Finally, Part 4 presents recommendations for policy makers on the limits and checks necessary to ensure that law enforcement use of face recognition respects civil liberties.

Part 1 provides a brief introduction to how face recognition works before exploring areas in which face recognition is particularly problematic for law enforcement use, presenting the following conclusions:

When the uncertainty and inaccuracy inherent in face recognition technology inform law enforcement decisions, it has real-world impact. An inaccurate system will implicate people for crimes they did not commit. And it will shift the burden onto defendants to show they are not who the system says they are.

Face recognition uniquely impacts civil liberties. The accumulation of identifiable photographs threatens important free speech and freedom of association rights under the First Amendment, especially because such data can be captured without individuals’ knowledge.

Face recognition disproportionately impacts people of color. Face recognition misidentifies African Americans and ethnic minorities, young people, and women at higher rates than whites, older people, and men, respectively.2 Due to years of well-documented, racially biased police practices, all criminal databases—including mugshot databases—include a disproportionate number of African Americans, Latinos, and immigrants.3 These two facts mean people of color will likely shoulder significantly more of the burden of face recognition systems’ inaccuracies than whites.

The collection and retention of face recognition data poses special security risks. All collected data is at risk of breach or misuse by external and internal actors, and there are many examples of misuse of law enforcement data in other contexts.4 Face recognition poses additional risks because, unlike a social security number or driver’s license number, we can’t change our faces. Law enforcement must do more to explain why it needs to collect so much sensitive biometric and biographic data, why it needs to maintain it for so long, and how it will safeguard it from breaches.

Part 2 explores how FBI’s face recognition programs exemplify these and other problems. FBI has positioned itself to be the central source for face recognition identification for not only federal but also state and local law enforcement agencies. FBI collects its own data, maintains data provided by state and local agencies, and facilitates access to face recognition data for more than 23,000 law enforcement agencies across the country and around the world. This makes it particularly important to look closely at FBI’s system, as its issues are likely present in other law enforcement systems.

After describing FBI’s internal and external face recognition programs—including the Next Generation Identification database and Interstate Photo System—and access to external data, Part 2 highlights three of FBI’s most urgent failures related to face recognition:

FBI has failed to address the problem of face recognition inaccuracy. The minimal testing and reporting conducted by FBI showed its own system was incapable of accurate identification at least 15 percent of the time. However, it refuses to provide necessary information to fully evaluate the efficacy of its system, and it refuses to update testing using the current, much larger database.

For years, FBI has failed to meet basic transparency requirements mandated by federal law regarding its Next Generation Identification database and its use of face recognition. The agency took seven years, for example, to update its Privacy Impact Assessment for its face recognition database, and failed to release a new one until a year after the system was fully operational.

The scope of FBI’s face recognition programs is still unclear. The public still does not have as much information as it should about FBI’s face recognition systems and plans for their future evolution.

Part 3 looks toward face recognition capabilities and concerns on the horizon, including the use of face recognition with police body-worn cameras, crowd photos, and social media photos.

Finally, Part 4 provides proposals for change. In particular, it provides a roadmap to policy makers considering face recognition legislation. It recommends concrete and specific technical and legal limits to place meaningful checks on government use of face recognition technology.

People should not be forced to submit to criminal face recognition searches merely because they want to drive a car. They should not have to worry their data will be misused by unethical government officials with unchecked access to face recognition databases. They should not have to fear that their every move will be tracked if face recognition is linked to the networks of surveillance cameras that blanket many cities. Without meaningful legal protections, this is where we may be headed.

Part 1: How Does Face Recognition Work and What Are The Risks?

What is Face Recognition and How Does it Work?

Face recognition is a type of biometric identification. Biometrics are intrinsic physical or behavioral characteristics that can be used to identify a person or verify their identity. Fingerprints are the most commonly known biometric, and they have been used regularly by criminal justice agencies to identify people for over a century. Other biometrics like face recognition, iris scans, palm prints, voice prints, wrist veins, a person’s gait, and DNA are becoming increasingly common.

Face recognition systems use computer algorithms to pick out specific, distinctive details about a person’s face from a photograph, a series of photographs, or a video segment. These details, such as the distance between the eyes or the shape of the chin, are then converted into a mathematical representation and compared to data on other faces previously collected and stored in a face recognition database. The data about a particular face is often called a “face template.” It is distinct from a photograph because it is designed to only include certain details that can be used to distinguish one face from another.

The data that comprises a face template is distinct from a photograph because it is designed to only include certain details that can be used to distinguish one face from another. Source: Iowa Department of Transportation
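The template-comparison step described above can be illustrated with a toy sketch. Everything here is invented for illustration: real systems extract far richer templates with proprietary algorithms, and the measurements and threshold below are made-up numbers.

```python
import math

def match_score(template_a, template_b):
    """Euclidean distance between two toy face templates;
    a smaller distance means the faces are more similar."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(template_a, template_b)))

# Hypothetical templates: each number stands in for a measurement
# such as eye spacing or chin shape, normalized to a common scale.
probe  = [0.42, 0.31, 0.77]   # template extracted from the new photo
stored = [0.40, 0.33, 0.75]   # template already in the database
THRESHOLD = 0.1               # arbitrary cutoff for this illustration

# Templates closer than the threshold are treated as a candidate match.
print(match_score(probe, stored) < THRESHOLD)
```

In practice the templates have hundreds of dimensions and the matcher, the distance measure, and the threshold are all vendor-specific; the point is only that a match is a numeric comparison, not an exact lookup.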

Face recognition systems are generally designed to do one of three things. First, a system may be set up to identify an unknown person. For example, a police officer would use this type of system to try to identify an unknown person in footage from a surveillance camera. The second type of face recognition system is set up to verify the identity of a known person. Smartphones rely on this type of system to allow you to use face recognition to unlock your phone. A third type, which operates similarly to a verification system, is designed to look for multiple specific, previously-identified faces. This system may be used, for example, to recognize card counters at a casino, or certain shoppers in a store, or wanted persons on a crowded subway platform.

Instead of positively identifying an unknown person, many face recognition systems are designed to calculate a probability match score between the unknown person and specific face templates stored in the database. These systems will offer up several potential matches, ranked in order of likelihood of correct identification, instead of just returning a single result. FBI’s system works this way.
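A ranked-candidate system of the kind just described can be sketched in a few lines. The record IDs and scores below are hypothetical; the sketch only shows the ranking logic, not any real system’s matcher.

```python
def candidate_list(scores, k=5):
    """Rank gallery records by match score, highest first, and return
    the top-k as potential matches rather than a single identification."""
    ranked = sorted(scores.items(), key=lambda item: item[1], reverse=True)
    return ranked[:k]

# Hypothetical probability match scores between one probe image and
# four gallery records (1.0 would be a perfect match).
scores = {"record_1047": 0.91, "record_0203": 0.62,
          "record_3310": 0.87, "record_0099": 0.15}

print(candidate_list(scores, k=3))  # highest-scoring candidates first
```

Note that this design always returns the k best-scoring records, even when none of them is the person in the probe image; that property underlies the accuracy questions raised later in this paper.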

Accuracy Challenges

Face recognition systems vary in their ability to identify people, and no system is 100 percent accurate under all conditions. For this reason, every face recognition system should report its rate of errors, including the number of false positives (also known as the “false accept rate” or FAR) and false negatives (also known as the “false reject rate” or FRR).

A “false positive” is generated when the face recognition system matches a person’s face to an image in a database, but that match is incorrect. This occurs when, for example, a police officer submits an image of “Joe,” but the system erroneously tells the officer that the photo is of “Jack.”

A “false negative” is generated when the face recognition system fails to match a person’s face to an image that is, in fact, contained in a database. In other words, the system will erroneously return zero results in response to a query. This could happen if, for example, you use face recognition to unlock your phone but your phone does not recognize you when you try to unlock it.

When researching a face recognition system, it is important to look closely at the “false positive” rate and the “false negative” rate, because there is almost always a trade-off. For example, if you are using face recognition to unlock your phone, it is better if the system fails to identify you a few times (false negative) than if it misidentifies other people as you and lets those people unlock your phone (false positive). Matching a person’s face to a mugshot database is another example. In this case, the result of a misidentification could be that an innocent person is treated as a violent fugitive and approached by the police with weapons drawn or even goes to jail, so the system should be designed to have as few false positives as possible.
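The trade-off between the two error rates can be made concrete with a small sketch. The match scores below are invented; the point is that moving the decision threshold trades false accepts against false rejects, so a system tuned for one goal necessarily gives ground on the other.

```python
def error_rates(genuine_scores, impostor_scores, threshold):
    """Compute (FAR, FRR) for a given match-score threshold.
    Scores at or above the threshold are treated as a match.
    genuine_scores:  scores for pairs of images of the SAME person
    impostor_scores: scores for pairs of images of DIFFERENT people"""
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    return far, frr

# Hypothetical scores from a made-up evaluation set.
genuine  = [0.92, 0.85, 0.78, 0.66, 0.95]
impostor = [0.40, 0.55, 0.71, 0.30, 0.62]

# Raising the threshold lowers FAR (fewer wrong matches) but
# raises FRR (more missed matches) -- the trade-off in the text.
for t in (0.5, 0.7, 0.9):
    far, frr = error_rates(genuine, impostor, t)
    print(f"threshold={t}: FAR={far:.2f}, FRR={frr:.2f}")
```

For a phone unlock you would pick a high threshold (low FAR, tolerable FRR); for the mugshot-search case the same logic argues for tuning hard against false positives.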

Technical issues endemic to all face recognition systems mean false positives will continue to be a common problem for the foreseeable future. Face recognition technologies perform well when all the photographs are taken with similar lighting and from a frontal perspective (like a mug shot). However, when photographs that are compared to one another contain different lighting, shadows, backgrounds, poses, or expressions, the error rates can be significant.5 Face recognition is also extremely challenging when trying to identify someone in an image shot at low resolution6 or in a video,7 and performs worse overall as the size of the data set (the population of images you are checking against) increases, in part because so many people within a given population look similar to one another. Finally, it is also less accurate with large age discrepancies (for example, if people are compared against a photo taken of themselves when they were ten years younger).

Unique Impact on Civil Liberties

Some proposed uses of face recognition would clearly impact Fourth Amendment rights and First Amendment-protected activities and would chill speech. If law enforcement agencies add crowd, security camera, and DMV photographs into their databases, anyone could end up in a database without their knowledge—even if they are not suspected of a crime—by being in the wrong place at the wrong time, by fitting a stereotype that some in society have decided is a threat, or by engaging in “suspect” activities such as political protest in public spaces rife with cameras. Given law enforcement’s history of misuse of data gathered based on people’s religious beliefs, race, ethnicity, and political leanings, including during former FBI director J. Edgar Hoover’s long tenure and during the years following September 11, 2001,8 Americans have good reason to be concerned about expanding government face recognition databases.

Like other biometrics programs that collect, store, share, and combine sensitive and unique data, face recognition technology poses critical threats to privacy and civil liberties. Our biometrics are unique to each of us, can’t be changed, and often are easily accessible. Face recognition, though, takes the risks inherent in other biometrics to a new level because it is much more difficult to prevent the collection of an image of your face. We expose our faces to public view every time we go outside, and many of us share images of our faces online with almost no restrictions on who may access them. Face recognition therefore allows for covert, remote, and mass capture and identification of images.9 The photos that may end up in a database could include not just a person’s face but also how she is dressed and possibly whom she is with.

Face recognition and the accumulation of easily identifiable photographs implicate free speech and freedom of association rights and values under the First Amendment, especially because face-identifying photographs of crowds or political protests can be captured in public, online, and through public and semi-public social media sites without individuals’ knowledge.

Law enforcement has already used face recognition technology at political protests. Marketing materials from the social media monitoring company Geofeedia bragged that, during the protests surrounding the death of Freddie Gray while in police custody, the Baltimore Police Department ran social media photos against a face recognition database to identify protesters and arrest them.10

Government surveillance like this can have a real chilling effect on Americans’ willingness to engage in public debate and to associate with others whose values, religion, or political views may be considered different from their own. For example, researchers have long studied the “spiral of silence”—the significant chilling effect on an individual’s willingness to publicly disclose political views when they believe their views differ from the majority.11 In 2016, research on Facebook users documented the silencing effect on participants’ dissenting opinions in the wake of widespread knowledge of government surveillance—participants were far less likely to express negative views of government surveillance on Facebook when they perceived those views were outside the norm.12

In 2013, a study involving Muslims in New York and New Jersey found that excessive police surveillance in Muslim communities had a significant chilling effect on First Amendment-protected activities.13 Specifically, people were less inclined to attend mosques they thought were under government surveillance, to engage in religious practices in public, or even to dress or grow their hair in ways that might subject them to surveillance based on their religion. People were also less likely to engage with others in their community who they did not know for fear any such person could either be a government informant or a radical. Parents discouraged their children from participating in Muslim social, religious, or political movements. Business owners took conscious steps to mute political discussion by turning off Al-Jazeera in their stores, and activists self-censored their comments on Facebook.14

These examples show the real risks to First Amendment-protected speech and activities from excessive government surveillance—especially when that speech represents a minority or disfavored viewpoint. While we do not yet appear to be at the point where face recognition is being used broadly to monitor the public, we are at a stage where the government is building the databases to make that monitoring possible. We must place meaningful checks on government use of face recognition now before we reach a point of no return.

Disproportionate Impact on People of Color

The false-positive risks discussed above will likely disproportionately impact African Americans and other people of color.15 Research—including research jointly conducted by one of FBI’s senior photographic technologists—found that face recognition misidentified African Americans and ethnic minorities, young people, and women at higher rates than whites, older people, and men, respectively.16 Due to years of well-documented racially-biased police practices, all criminal databases—including mugshot databases—include a disproportionate number of African Americans, Latinos, and immigrants.17 These two facts mean people of color will likely shoulder exponentially more of the burden of face recognition inaccuracies than whites.

False positives can alter the traditional presumption of innocence in criminal cases by placing more of a burden on suspects and defendants to show they are not who the system identifies them to be. This is true even if a face recognition system offers several results for a search instead of one; each of the people identified could be brought in for questioning, even if there is nothing else linking them to the crime. Former German Federal Data Protection Commissioner Peter Schaar has noted that false positives in face recognition systems pose a large problem for democratic societies: “[I]n the event of a genuine hunt, [they] render innocent people suspects for a time, create a need for justification on their part and make further checks by the authorities unavoidable.”18

Face recognition accuracy problems also unfairly impact African American and minority job seekers who must submit to background checks. Employers regularly rely on FBI’s data, for example, when conducting background checks. If job seekers’ faces are matched mistakenly to mug shots in the criminal database, they could be denied employment through no fault of their own. Even if job seekers are properly matched to a criminal mug shot, minority job seekers will be disproportionately impacted due to the notorious unreliability of FBI records as a whole. At least 50 percent of FBI’s arrest records fail to include information on the final disposition of the case: whether a person was convicted, acquitted, or if charges against them were dropped.19 Because at least 30 percent of people arrested are never charged with or convicted of any crime, this means a high percentage of FBI’s records incorrectly indicate a link to crime. If these arrest records are not updated with final disposition information, hundreds of thousands of Americans searching for jobs could be prejudiced and lose work. Due to disproportionately high arrest rates, this uniquely impacts people of color.

Security Risks Posed by the Collection and Retention of Face Recognition Data

All government data is at risk of breach and misuse by insiders and outsiders. However, the results of a breach of face recognition or other biometric data could be far worse than other identifying data, because our biometrics are unique to us and cannot easily be changed.

The many recent security breaches, email hacks, and reports of falsified data—including biometric data—show that the government needs extremely rigorous security measures and audit systems in place to protect against data loss. In 2017, hackers took over 123 of Washington D.C.’s surveillance cameras just before the presidential inauguration, leaving them unable to record for several days.20 During the 2016 election year, news media were consumed with stories of hacks into email and government systems, including into United States political organizations and online voter registration databases in Illinois and Arizona.21 In 2015, sensitive data stored in Office of Personnel Management (OPM) databases on more than 25 million people was stolen, including biometric information, addresses, health and financial history, travel data, and data on people’s friends and neighbors.22 More than anything, these breaches exposed the vulnerabilities in government systems to the public—vulnerabilities that the United States government appears to have known for almost two decades might exist.23

The risks of a breach of a government face recognition database could be much worse than the loss of other data, in part because one vendor—MorphoTrust USA—has designed the face recognition systems for the majority of state driver’s license databases, federal and state law enforcement agencies, border control and airports (including TSA PreCheck), and the State Department. This means that software components and configuration are likely standardized across all systems, so one successful breach could threaten the integrity of data in all databases.

Vulnerabilities exist from insider threats as well. Past examples of improper and unlawful police use of driver and vehicle data suggest face recognition data will also be misused. For example, a 2011 state audit of law enforcement access to driver information in Minnesota revealed “half of all law-enforcement personnel in Minnesota had misused driving records.”24 In 2013, the National Security Agency’s Inspector General revealed NSA workers had misused surveillance records to spy on spouses, boyfriends, and girlfriends, including, at times, listening in on phone calls. Another internal NSA audit revealed the “unauthorized use of data about more than 3,000 Americans and green-card holders.”25 Between 2014 and 2015, Florida’s Department of Highway Safety and Motor Vehicles reported about 400 cases of improper use of its Driver and Vehicle Information Database.26 And a 2016 Associated Press investigation based on public records requests found that “[p]olice officers across the country misuse confidential law enforcement databases to get information on romantic partners, business associates, neighbors, journalists and others for reasons that have nothing to do with daily police work.”27

Many of the recorded examples of database and surveillance misuse involve male officers targeting women. For example, the AP study found officers took advantage of access to confidential information to stalk ex-girlfriends and look up home addresses of women they found attractive.28 A study of England’s surveillance camera systems found the mostly male operators used the cameras to spy on women.29 In 2009, FBI employees were accused of using surveillance equipment at a charity event at a West Virginia mall to record teenage girls trying on prom dresses.30 In Florida, an officer breached the driver and vehicle database to look up a local female bank teller he was interested in.31 More than 100 other Florida officers accessed driver and vehicle information for a female Florida state trooper after she pulled over a Miami police officer for speeding.32 In Ohio, officers looked through a law enforcement database to find information on an ex-mayor’s wife, along with council people and spouses.33 And in Illinois, a police sergeant suspected of murdering two ex-wives was found to have used police databases to check up on one of his wives before she disappeared.34

It is unclear what, if anything, federal and state agencies have done to improve the security of their systems and prevent insider abuse. In 2007, the Government Accountability Office (GAO) specifically criticized FBI for its poor security practices. GAO found, “[c]ertain information security controls over the critical internal network reviewed were ineffective in protecting the confidentiality, integrity, and availability of information and information resources.”35 Given all of this—and the fact that agencies often retain personal data longer than a person’s lifetime36—law enforcement agencies must do more to explain why they need to collect so much sensitive biometric and biographic data, why they need to maintain it for so long, and how they will safeguard the data from the data breaches we know will occur in the future.

Part 2: FBI’s Face Recognition Databases and Systems

FBI’s face recognition databases and systems—and the critical problems with them—shed light on broader issues with law enforcement use of face recognition. State and local law enforcement agencies across the country both provide and use much of the data that makes up FBI’s main biometric database. With FBI acting as a national repository for law enforcement face recognition data, it is important to look closely at its flaws, in particular its inaccuracy, lack of transparency and oversight, and unclear scope.

Much of what we now know about FBI’s use of face recognition comes from a scathing report issued in 2016 by the federal Government Accountability Office (GAO).37 This report revealed, among other things, that FBI could access nearly 412 million images—most of which were taken for non-criminal reasons like obtaining a driver’s license or a passport. The report chastised FBI for being less than transparent with the public about its face recognition programs and security issues.

FBI’s Internal and External Access to Face Recognition Data

The Next Generation Identification Database and Interstate Photo System

FBI’s Next Generation Identification system (NGI) is a massive biometric database that includes fingerprints, iris scans, and palm prints collected from millions of individuals, not just as part of an arrest, but also for non-criminal reasons like background checks, state licensing requirements, and immigration. The Interstate Photo System (IPS) is the part of NGI that contains photographs searchable through face recognition. Each of the biometric identifiers in NGI is linked to personal, biographic, and identifying information, and, where possible, each file includes multiple biometric identifiers. FBI has designed NGI to be able to expand in the future as needed to include “emerging biometrics,” such as footprint and hand geometry, tattoo recognition, gait recognition, and others.38

NGI incorporates both criminal and civil records. NGI’s criminal repository includes records on people arrested at the local, state, and federal levels as well as biometric data taken from crime scenes and data on missing and unidentified persons. NGI’s civil repository stores biometric and biographic data collected from members of the military and those applying for immigration benefits. It also includes biometric data collected as part of a background check or state licensing requirement for many types of jobs, including, depending on the state, licensing to be a dentist, accountant, teacher, geologist, realtor, lawyer, or optometrist.39 Since 1953, all jobs with the federal government have also required a fingerprint check, no matter the salary range or level of responsibility.40

As of December 2017, NGI included more than 74 million biometric records in the criminal repository and over 57.5 million records in the civil repository.41 By the end of fiscal year 2016, it also already contained more than 51 million civil and criminal photographs searchable through face recognition.42

The states have been very involved in the development and use of the NGI database. NGI includes more than 20 million civil and criminal images received directly from at least six states: California, Louisiana, Michigan, New York, Texas, and Virginia. Five additional states—Florida, Maryland, Maine, New Mexico, and Arkansas—can send search requests directly to the NGI database. As of December 2015, FBI was working with eight more states to grant them access to NGI, and an additional 24 states were also interested.43

In 2015, FBI announced that for the first time it would link almost all of the non-criminal data in NGI with criminal data as a “single identity record.”44 This means that, if a person submits fingerprints as part of their job search, those prints will be retained by FBI and searched, along with criminal prints, thousands of times a day45 as part of investigations into any crime by more than 23,000 law enforcement agencies across the country and around the world.46

For the IPS, FBI has said—for now—that it is keeping non-criminal photographs separate from criminal photographs.47 However, if a person is ever arrested for any crime—even for something as minor as blocking a street as part of a First Amendment-protected protest—their non-criminal photographs will be combined with their criminal record and will become fair game for the same face recognition searches associated with any criminal investigation.48 As of December 2015, over 8 million civil records were also included in the criminal repository.49

FBI Access to External Face Recognition Databases

FBI has been seeking broader access to external face recognition databases, like state DMV databases, since before its NGI IPS program was fully operational.50 It revealed some information about its program in mid-2015.51 However, the full scope of that access was not revealed until the GAO issued its report over a year later.52

The GAO report disclosed for the first time that FBI had access to over 400 million face recognition images—hundreds of millions more than journalists and privacy advocates had been able to estimate before that. According to the GAO report, the FBI’s FACE (Facial Analysis, Comparison, and Evaluation) Services Unit not only had access to the NGI face recognition database of nearly 30 million civil and criminal mugshot photos,53 but it also had access to the State Department’s visa and passport databases, the Defense Department’s biometric database, and the driver’s license databases of at least 16 states. Totaling 411.9 million images, this is an unprecedented number of photographs, most of which were collected under civil and not criminal circumstances.

Under never-disclosed agreements between FBI and its state and federal partners,54 FBI may search these civil photos whenever it is trying to find a suspect in a crime. And FBI has been searching its external partner databases extensively; between August 2011 and December 2015, FBI requested nearly 215,000 searches of external partners’ databases.55 As of December 2017, FBI’s FACE Services Unit was conducting more than 7,000 searches per month—2,200 more than in the same month a year earlier.56

Failure to Address Accuracy Problems

FBI has done little to ensure its face recognition search results (which the Bureau calls “investigative leads”) do not implicate innocent people. According to the GAO report and FBI’s responses to EFF’s Freedom of Information Act requests,57 FBI has conducted only very limited testing to ensure the accuracy of NGI’s face recognition capabilities. Further, it has not taken any steps to determine whether the face recognition systems of its external partners—states and other federal agencies—are sufficiently accurate to prevent innocent people from being identified as criminal suspects.

FBI admits its system is inaccurate, noting in its Privacy Impact Assessment (PIA) for the IPS that it “may not be sufficiently reliable to accurately locate other photos of the same identity, resulting in an increased percentage of misidentifications.”58 However, FBI has disclaimed responsibility for accuracy in its face recognition system, stating that “[t]he candidate list is an investigative lead not an identification.”59 Because the system is designed to provide a ranked list of candidates, FBI has stated the IPS never actually makes a “positive identification,” and “therefore, there is no false positive rate.”60 In fact, FBI only ensures that “the candidate will be returned in the top 50 candidates” 85 percent of the time “when the true candidate exists in the gallery.”61 It is unclear what happens when the “true candidate” does not exist in the gallery, however. Does NGI still return possible matches? Could those people then be subject to criminal investigation for no other reason than that a computer thought their face was mathematically similar to a suspect’s?
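The accuracy guarantee FBI does offer—the true candidate appears in the top 50 returned candidates 85 percent of the time—is what evaluators call a rank-k detection rate. A minimal sketch of how such a metric is computed, using hypothetical data structures and results invented for illustration:

```python
# Illustrative sketch of the rank-k "detection rate" the FBI cites:
# the fraction of searches where the true match appears among the
# top k returned candidates. All names and data here are hypothetical.

def rank_k_detection_rate(searches, k=50):
    """searches: list of (true_id, ranked_candidate_ids) pairs,
    restricted to probes whose true match exists in the gallery."""
    hits = sum(1 for true_id, ranked in searches if true_id in ranked[:k])
    return hits / len(searches)

# Toy example: 4 searches, 3 of which return the true match in the top 50.
results = [
    ("A", ["A", "X", "Y"]),          # hit at rank 1
    ("B", ["X"] * 49 + ["B"]),       # hit at rank 50
    ("C", ["X"] * 50 + ["C"]),       # miss: true match at rank 51
    ("D", ["D", "X"]),               # hit at rank 1
]
print(rank_k_detection_rate(results))  # → 0.75
```

Note what this metric cannot capture: it is computed only over probes whose true match exists in the gallery, so it says nothing about how often the system returns plausible-looking candidates for people who are not in the database at all.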

The GAO report criticizes FBI’s cavalier attitude regarding false positives, noting that “reporting a detection rate without reporting the accompanying false positive rate presents an incomplete view of the system’s accuracy.”62 The report also notes that FBI’s stated detection rate may not represent operational reality because FBI only conducted testing on a limited subset of images and failed to conduct additional testing as the size of the database increased. FBI also has never tested to determine detection rates where the size of the responsive candidate pool is reduced to a number below 50.63

The number of false positives a system generates is especially important when those false positives represent real people who may become suspects in a criminal investigation.64

FBI’s face recognition programs involve multiple factors that will decrease accuracy. For example, face recognition performs worse overall as the size of the database increases, in part because so many people within a given population look similar to one another. At more than 50 million searchable photos so far,65 FBI’s face recognition system constitutes a very large database.

Face recognition is also extremely challenging at low image resolutions.66 EFF learned through documents FBI released in response to our 2012 FOIA request that the median resolution of images submitted through an IPS pilot program was “well-below” the recommended resolution of 3/4 of a megapixel.67 (In comparison, newer iPhone cameras are capable of 12 megapixel resolution.68) Another FBI document released to EFF noted that because “the trend for the quality of data received by the customer is lower and lower quality, specific research and development plans for low-quality submission accuracy improvement is highly desirable.”69
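For a sense of scale, a megapixel count is simply the pixel width times the pixel height divided by one million. The frame dimensions below are illustrative examples, not figures from the FBI documents:

```python
# Rough arithmetic behind the resolution figures in the text:
# a "megapixel" count is just width x height / 1,000,000.
def megapixels(width, height):
    return width * height / 1_000_000

# A 1024x768 frame (a common legacy camera/CCTV size) lands close to the
# recommended 3/4-megapixel threshold mentioned above...
print(round(megapixels(1024, 768), 2))   # → 0.79
# ...while a 4032x3024 frame (a typical 12 MP smartphone sensor) far exceeds it.
print(round(megapixels(4032, 3024), 2))  # → 12.19
```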

FBI claims it uses human examiners to review the system’s face recognition matches, but using humans to perform the final suspect identification from a group of photos provided by the system does not solve accuracy problems. Research has shown that, without specialized training, humans may be worse at identification than a computer algorithm. That is especially true when the subject is someone they do not already know or someone of a race or ethnicity different from their own.70 Many of the searches conducted in NGI are by state and local agencies. NGI provides search results to these agencies on a blind or “lights out” basis (i.e. no one at FBI reviews the results before they are provided to the agencies).71 It is unlikely the smaller agencies will have anyone on staff who is appropriately trained to review these search results, so misidentifications are very likely to occur.

Failure to Produce Basic Information about NGI and its Use of Face Recognition as Required by Federal Law

Despite going live with NGI in increments since at least 2008, FBI has failed to release basic information about its system, including information mandated by federal law, that would have informed the public about what data FBI has been collecting and how that data is being used and protected.

The federal Privacy Act of 1974 and the E-Government Act of 2002 require agencies to address the privacy implications of any system that collects identifiable information on the public.72 The Privacy Act requires agencies to provide formal notice in the Federal Register about any new system that collects and uses Americans’ personal information.73 This notice, called a System of Records Notice (SORN), must describe exactly what data is collected and how it is being used and protected, and must be published with time for the public to comment. The E-Government Act requires agencies to conduct Privacy Impact Assessments (PIAs) for all programs that collect information on the public and notify the public about why the information is being collected, the intended use of the information, with whom the information will be shared, and how the information will be secured. PIAs should be conducted during the development of any new system “with sufficient lead time to permit final Departmental approval and public website posting on or before the commencement of any system operation (including before any testing or piloting.)”74

PIAs and SORNs are an important check against government encroachment on privacy. They allow the public to see how new government programs and technology affect their privacy and assess whether the government has done enough to mitigate the privacy risks. As the DOJ’s own guidelines on PIAs explain, “The PIA also . . . helps promote trust between the public and the Department by increasing transparency of the Department’s systems and missions.”75 As noted, they are also mandatory.76

FBI complied with these requirements when it began developing its face recognition program in 2008 by issuing a PIA for the program that same year. However, as the Bureau updated its plans for face recognition, it failed to update its PIA, despite calls from Congress and members of the privacy advocacy community to do so.77 It didn’t issue a new PIA until late 2015—a full year after the entire IPS was online and fully operational, and at least four years after FBI first started incorporating face recognition-compatible photographs into NGI.78 Before FBI issued the new PIA, it had already conducted over 100,000 searches of its database.79

FBI also failed to produce a SORN for the NGI system until 2016.80 For years FBI skirted the Privacy Act by relying on an outdated SORN from 1999 describing its legacy criminal database called IAFIS (Integrated Automated Fingerprint Identification System),81 which only included biographic information, fingerprints, and non-searchable photographs. Even FBI now admits that NGI contains nine “enhancements” that make it fundamentally different from the original IAFIS database that it replaces.82

The GAO report specifically faulted FBI for amassing, using, and sharing its face recognition technologies without ever explaining the privacy implications of its actions to the public. As GAO noted, the whole point of a PIA is to give the public notice of the privacy implications of data collection programs and to ensure that privacy protections are built into the system from the start. FBI failed to do this.

Unclear Scope

The public still does not have as much information as it should about FBI’s face recognition systems and FBI’s plans for their future evolution. For example, a Request for Proposals that FBI released in 2015 indicated the agency planned to allow law enforcement officers to use mobile devices to collect face recognition data out in the field and submit that data directly to NGI.83 By the end of 2017, state and local law enforcement officers from 29 states and the District of Columbia were already able to access certain FBI criminal records via mobile devices, and the Bureau has said it expects to expand access in 2018.84

As we have seen with state and local agencies that have already begun using mobile biometric devices, officers may use such devices in ways that push the limits of and in some cases directly contradict constitutional law. For example, in San Diego, where officers from multiple agencies use mobile devices to photograph people right on the street and immediately upload those images to a shared face recognition database, officers have pressured citizens to consent to having their picture taken.85 Regional law enforcement policy has also allowed collection based on First Amendment-protected activities like an “individual’s political, religious, or social views, associations or activities” as long as that collection is limited to “instances directly related to criminal conduct or activity.”86

From FBI’s past publications related to NGI,87 it is unclear whether FBI would retain the images collected with mobile devices in the NGI database. If it does, this would directly contradict 2012 congressional testimony where an FBI official said that “[o]nly criminal mug shot photos are used to populate the national repository.”88 A photograph taken in the field before someone is arrested is not a “mug shot.”

Part 3: Face Recognition Capabilities and Concerns On The Horizon

Law enforcement agencies are exploring other ways to take advantage of face recognition. For example, there is some indication FBI and other agencies would like to incorporate crowd photos and images taken from social media into their databases. A 2011 Memorandum of Understanding (MOU) between Hawaii and FBI shows that the government has considered “permit[ting] photo submissions independent of arrests.”89 It is not clear from the document what types of photos this could include, but FBI’s privacy-related publications about NGI and IPS90 leave open the possibility that FBI may plan to incorporate crowd or social media photos into NGI in the future. FBI’s most recent PIA notes that NGI’s “unsolved photo file” contains photographs of “unknown subjects,”91 and the SORN notes the system includes “biometric data” that has been “retrieved from locations, property, or persons associated with criminal or national security investigations.”92 Because criminal investigations may occur in virtual as well as physical locations, this loophole seems to allow FBI to include images collected from security cameras, social media accounts, and other similar sources.

At some point in the future, FBI may also attempt to populate NGI with millions of other non-criminal photographs. The GAO report notes FBI’s FACE Services Unit already has access to the IPS, the State Department’s Visa and Passport databases, the Defense Department’s biometric database, and the driver’s license databases of at least 16 states.93 However, the combined 412 million images in these databases may not even represent the full scope of FBI access to face recognition data today. When GAO’s report first went to press, it noted that FBI officials had stated FBI was in negotiations with 18 additional states to obtain access to their driver’s license databases.94 This information was kept out of later versions of the report, so it is unclear where these negotiations stand today. The later version of the report also indicates Florida does not share its driver’s license data with FBI, but Georgetown’s 2016 report on law enforcement access to state face recognition databases contradicts this; Georgetown found FBI field offices in Florida can search all driver’s license and ID photos in the state.95

FBI has hinted it has broader plans than these, however. FBI indicated in a 2010 presentation that it wants to use NGI to track people’s movements to and from “critical events” like political rallies, to identify people in “public datasets,” to “conduct[] automated surveillance at lookout locations,” and to identify “unknown persons of interest” from photographs.96 This suggests FBI wants to be able to search and identify people in photos of crowds and in pictures posted on social media sites—even if the people in those photos haven’t been arrested for or suspected of a crime.

While identifying an unknown face in a crowd in real time from a very large database of face images would still be particularly challenging,97 researchers in other countries claim they are well on the way to solving this problem. Recently, Russian developers announced that their system, called FindFace, could identify a person on the street with about 70 percent accuracy if that person had a social media profile.98 Law enforcement agencies in other countries are partnering with face recognition vendors to identify people from archived CCTV footage,99 and the United States National Institute of Standards and Technology (NIST), in partnership with the Department of Homeland Security, has sponsored research to assess the capability of face recognition algorithms to correctly identify people in videos.100 As NIST notes, use cases for this technology include “high volume screening of persons in the crowded spaces (e.g. an airport)” and “[l]ow volume forensic examination of footage from a crime scene (e.g. a convenience store).” While NIST recognizes the ability to recognize “non-cooperative” people in video is still incredibly challenging, it notes, “Given better cameras, better design, and the latest algorithm developments, recognition accuracy can advance even further.”101 In fact, face recognition vendors are already working with large event organizers to identify people in real time at sports events in the United States and abroad.102

Police officers are also increasingly interested in using face recognition with body-worn cameras, despite the clear security risks and threats to privacy posed by such systems.103 A U.S. Department of Justice-sponsored 2016 study found that at least nine of 38 manufacturers currently include face recognition in body-worn cameras or are making it possible to include in the future.104 Some of these body-worn camera face recognition systems allow cameras to be turned on and off remotely and allow camera feeds to be monitored back at the station.105 As we have seen with other camera systems, remote access and control increases the security risk that bad actors could hijack the feed or that the data could be transmitted in the clear to anyone who happened to intercept it.106

Adding face recognition to body-worn cameras would also undermine the primary original purposes of these tools: to improve police interactions with the public and increase oversight and trust of law enforcement. People are much less likely to seek help from the police if they know or suspect not only that their interactions are being recorded, but also that they can be identified in real time or in the future. This also poses a grave threat to First Amendment-protected speech and the ability to speak anonymously, which has been recognized as a necessity for a properly-functioning democracy since the birth of the United States.107 Police officers are almost always present at political protests in public places and are increasingly wearing body-worn cameras while monitoring activities. Using face recognition would allow officers to quickly identify and record specific protesters, chilling speech and discouraging people who are typically targeted by police from participating. Face recognition on body-worn cameras will also allow officers to covertly identify and surveil the public on a scale we have never seen before.

Near-future uses of face recognition may also include identifying people at night in the dark,108 projecting what someone will look like later in life based on how they look as a child,109 and generating a photograph-like image of a person from a police sketch or even from a sample of DNA.110 Researchers are also developing ways to apply deep learning and artificial intelligence to improve the accuracy and speed of face recognition systems.111 Some claim these advanced systems may in the future be able to detect such private information as sexual orientation, political views, high IQs, a predisposition to criminal behavior, and specific personality traits.112

Near-future uses include generating a photograph-like image of a person from a sketch. Source: Center for Identification Technology Research.

Face recognition does not work without databases of pre-collected images. The federal government and state and local law enforcement agencies are working hard to build out these databases today, and NIST is sponsoring research in 2018 to measure advancements in the accuracy and speed of face recognition identification algorithms that search databases containing at least 10 million images.113 This means the time is ripe for new laws to prevent the overcollection of images in the future and to place severe limits on the use of images that already exist.

Part 4: Proposals for Change

The over-collection of face recognition data has become a real concern, but there are still opportunities—both technological and legal—for change. Transparency, accountability, and strict limits on use are critical to ensuring that face recognition not only comports with constitutional protections but also preserves democratic values.

Legislation is an important option for addressing these issues, and the federal government’s response to two seminal wiretapping cases in the late 1960s could be used as a model for face recognition legislation today.114 In the wake of Katz v. United States115 and Berger v. New York,116 the federal government enacted the Wiretap Act,117 which lays out specific rules that govern federal wiretapping, including the evidence necessary to obtain a wiretap order, limits on a wiretap’s duration, reporting requirements, a notice provision, and also a suppression remedy that anticipates wiretaps may sometimes be conducted unlawfully.118 Since then, law enforcement’s ability to wiretap a suspect’s phone or electronic device has been governed primarily by statute rather than Constitutional case law.

Legislators could also look to the Video Privacy Protection Act (VPPA).119 Enacted in 1988, the VPPA prohibits the “wrongful disclosure of video tape rental or sale records” or “similar audio-visual materials,” requires a warrant before a video service provider may disclose personally identifiable information to law enforcement, and includes a civil remedies enforcement provision.

Although some believe that Congress is best positioned to ensure that appropriate safeguards are put in place for technologies like face recognition, Congress has been unable to make non-controversial updates to existing law enforcement surveillance legislation,120 much less enact new legislation. For that reason, the best hope at present is that states will fill the void, as several states have already done in other contexts by passing legislation that limits surveillance technologies like location and communications tracking.121

Legislators and regulators considering limits on the use of face recognition should keep the following nine principles in mind to protect privacy and security.122 These principles are based in part on key provisions of the Wiretap Act and VPPA and in part on the Fair Information Practice Principles (FIPPs), an internationally recognized set of privacy-protecting standards.123 The FIPPs predate the modern Internet but have been recognized and developed by government agencies in the United States, Canada, and Europe since 1973, when the United States Department of Health, Education, and Welfare released a seminal report on privacy protections in the age of data collection called Records, Computers, and the Rights of Citizens.124

Limit the Collection of Data

The collection of face recognition data should be limited to the minimum necessary to achieve the government’s stated purpose. For example, the government’s acquisition of face recognition data from sources other than the individual to populate a database should be limited. The government should not obtain face recognition data en masse to populate its criminal databases from sources where the biometric was originally acquired for a non-criminal purpose (such as state DMV records), or from crowd photos or data collected by the private sector. Techniques should also be employed to avoid over-collection of face prints (such as from security cameras or crowd photos) by, for example, scrubbing the images of faces that are not central to an investigation. The police should not retain “probe” images—that is, images of unidentified individuals—or enter them into a database. Agencies should also not retain the results of image searches, except for audit purposes.

Define Clear Rules on the Legal Process Required for Collection

Face recognition should be subject to clear rules on when it may be collected and which specific legal processes—such as a warrant based on probable cause—are required prior to collection. Collection and retention should be specifically disallowed without legal process unless the collection falls under a few very limited and defined exceptions. For example, clear rules should define when, if ever, law enforcement and other agencies may collect face recognition images from the general public without their knowledge.

Limit the Amount and Type of Data Stored and Retained

A face print can reveal much more information about a person than his or her identity, so rules should be set to limit the amount of data stored. Retention periods should be defined by statute and limited in time, with a high priority on deleting data. Data that is deemed to be “safe” from a privacy perspective today could become highly identifying tomorrow. For example, a dataset that includes crowd images could become much more privacy-invasive as identification technology improves. Similarly, data that is separate and siloed or unjoinable today might be easily joinable tomorrow. For this reason retention should be limited, and there should be clear and simple methods for a person to request removal of his or her biometric from the system if, for example, they have been acquitted or are no longer under investigation.125

Limit the Combination of More than One Biometric in a Single Database

Different biometric data sources should be stored in separate databases. If a face template needs to be combined with other biometrics, that should happen on an ephemeral basis for a particular investigation. Similarly, biometric data should not be stored together with non-biometric contextual data that would increase the scope of a privacy invasion or the harm that would result if a data breach occurred. For example, combining face recognition images or video from public cameras with license plate information increases the potential for tracking and surveillance. This should be avoided, or limited to specific individual investigations.

Define Clear Rules for Use and Sharing

Biometrics collected for one purpose should not be used for another. For example, face prints collected in a non-criminal context, such as for a driver’s license or to obtain government benefits, should not be shared with law enforcement—if they are shared at all—without strict legal process. Similarly, face prints collected for use in an immigration context, such as to obtain a visa, should not automatically be used or shared with an agency to identify a person in a criminal context. Face recognition should only be used—if it is used at all—under extremely limited circumstances after all other investigative options have been exhausted. It should not be used to identify and track people in real time without a warrant that contains specific limitations on time and scope. Additionally, private sector databases should not only be required to obtain user consent before enrolling people in any face recognition system but also be severely restricted from sharing their data with law enforcement.

Enact Robust Security Procedures to Minimize the Threat of Imposters on the Front End and Avoid Data Compromise on the Back End

Because most biometrics cannot easily be changed, and because all databases are inherently vulnerable to attack, data compromise is especially problematic. The use of traditional security procedures is paramount, such as implementing basic access controls that require strong passwords, limiting access privileges for most employees, excluding unauthorized users, and encrypting data transmitted throughout the system. On top of that, security procedures specific to biometrics should also be enacted to protect the data. For example, data should be anonymized or stored separately from personal biographical information. Strategies should also be employed at the outset to pre-emptively counter data compromise and to prevent digital copies of biometrics. Biometric encryption126 or “hashing” protocols that introduce controllable distortions into the biometric before matching can reduce the risk of problems later. The distortion parameters can easily be changed to make it technically difficult to recover the original privacy-sensitive data from the distorted data, should the data ever be breached or compromised.127
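The controllable-distortion idea can be sketched very simply: derive the distortion from a secret key, match on distorted templates only, and revoke a breached database by changing the key. The sketch below is a toy illustration of the concept under invented names and data, not any vetted scheme; real biometric encryption and hashing protocols are considerably more involved.

```python
# Minimal sketch of a "cancelable biometrics" distortion: project a face
# template through a key-derived random matrix before storing or matching.
# All names, dimensions, and data here are hypothetical.
import random

def distort(template, key, dim=8):
    """Project the template through a matrix fully determined by the key.
    Matching happens on distorted templates, so a breached database can
    be invalidated by changing the key rather than the face."""
    rng = random.Random(key)  # the key seeds (and so determines) the distortion
    matrix = [[rng.gauss(0, 1) for _ in template] for _ in range(dim)]
    return [sum(m * t for m, t in zip(row, template)) for row in matrix]

enrolled = [0.1, 0.7, -0.3, 0.5]   # hypothetical enrolled face template
probe    = [0.1, 0.7, -0.3, 0.5]   # the same face, captured again

same_key_match = distort(enrolled, key=42) == distort(probe, key=42)
after_revocation = distort(enrolled, key=42) == distort(enrolled, key=99)
print(same_key_match, after_revocation)  # → True False
```

The design point is the last line: with the same key, the same face still matches; once the key is changed, copies distorted under the old key no longer match anything, which is exactly the revocability that raw biometric storage lacks.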

Mandate Notice Procedures

Because of the risk that face prints will be collected without a person’s knowledge, rules should define clear notice requirements to alert people to the fact that a face print has been collected. The notice should also make clear how long the data will be stored and how to request its removal from the database.

Define and Standardize Audit Trails and Accountability Throughout the System

All database transactions—including face recognition input, access to and searches of the system, data transmission, etc.—should be logged and recorded in a way that ensures accountability. Privacy and security impact assessments, including independent certification of device design and accuracy, should be conducted regularly.

Ensure Independent Oversight

Government entities that collect or use face recognition must be subject to meaningful oversight from an independent entity. Individuals whose data are compromised by the government or the private sector should have strong and meaningful avenues to hold them accountable.

Conclusion

Face recognition and its accompanying privacy and civil liberties concerns are not going away. Given this, it is imperative that government act now to limit unnecessary data collection; instill proper protections on data collection, transfer, and search; ensure accountability; mandate independent oversight; require appropriate legal process before collection and use; and define clear rules for data sharing at all levels. This is crucial to preserve the democratic and constitutional values that are the bedrock of American society.

Acronyms and Useful Terms

Face template – The data that face recognition systems extract from a photograph to represent a particular face. This data consists of specific, distinctive details about a person’s face, such as the distance between the eyes or the shape of the chin, converted into a mathematical representation. A face template is distinct from the original photograph because it is designed to only include certain details that can be used to distinguish one face from another. This may also be called a “face print.”

False negative – The result when a face recognition system fails to match a person’s face to an image that is contained in the database.

False positive – The result when a face recognition system matches a person’s face to an image in the database, but that match is incorrect.

FAR – False accept rate. The rate at which a system produces false positives.

FRR – False reject rate. The rate at which a system produces false negatives.

Gallery – The entire database of face recognition data against which searches are conducted.

Gallery of candidate photos – The list of photos a face recognition system produces as potential matches in response to a search. For example, when a law enforcement agency submits a photo of a suspect to find matches in a mugshot database, the list of potential matches from the repository is called the gallery of candidate photos.

GAO – Government Accountability Office.

IPS – Interstate Photo System. The part of the NGI that contains photographs searchable through face recognition.

NGI – Next Generation Identification. The NGI database is a massive biometric database that includes fingerprints, iris scans, and palm prints collected from millions of individuals not just as part of an arrest, but also for non-criminal reasons like background checks, state licensing requirements, and immigration.

OPM – Office of Personnel Management.

PIA – Privacy Impact Assessment.

Probe photo – The photo that is searched against a face recognition database. For example, a law enforcement agency might submit a “probe photo” of an unidentified suspect to search for potential matches in a mugshot database.

Julie Hirschfeld Davis, Hacking of Government Computers Exposed 21.5 Million People, N.Y. Times (July 9, 2015), http://www.nytimes.com/2015/07/10/us/office-of-personnel-management-hackers-got-data-of-millions.html; See also, e.g., David Stout and Tom Zeller Jr., Vast Data Cache About Veterans Is Stolen, N.Y. Times (May 23, 2006), https://www.nytimes.com/2006/05/23/washington/23identity.html; See also MEPs question Commission over problems with biometric passports, European Parliament News (Apr. 19, 2012), http://www.europarl.europa.eu/news/en/headlines/content/20120413STO42897/html/MEPs-question-Commission-over-problems-with-biometric-passports (noting that, at the time, “In France 500,000 to 1 million of the 6.5 million biometric passports in circulation are estimated to be false, having been obtained on the basis of fraudulent documents”).

Simon Davies, Little brother is watching you, Independent (Aug. 25, 1998) https://www.independent.co.uk/arts-entertainment/little-brother-is-watching-you-1174115.html (Researchers found that “10 per cent of the time spent filming women was motivated by voyeurism.” One researcher noted, “It is not uncommon for operators to make `greatest hits’ compilations.”); Man jailed for eight months for spying on woman with police camera, TheJournal.ie (Sept. 26, 2014), http://www.thejournal.ie/cctv-police-spying-woman-1693080-Sep2014.

This number is now more than 50,000. FBI, CJIS Annual Report 2016, supra note 42.

FBI has not released these agreements.

GAO Report, supra note 37, at 10.

See December 2017 NGI Monthly Fact Sheet, supra note 41.

See GAO Report, supra note 37, at 26-27; Jennifer Lynch, FBI Plans to Have 52 Million Photos in its NGI Face Recognition Database by Next Year, and accompanying documents. https://www.eff.org/deeplinks/2014/04/fbi-plans-have-52-million-photos-its-ngi-face-recognition-database-next-year.

See Lynch, FBI Plans to Have 52 Million Photos in its NGI Face Recognition Database by Next Year, supra note 57. The FBI has also noted that because “this is an investigative search and caveats will be prevalent on the return detailing that the [non-FBI] agency is responsible for determining the identity of the subject, there should be NO legal issues.” Id.

Id.

Id.

GAO Report, supra note 37, at 27.

GAO Report, supra note 37, at 26.

Security researcher Bruce Schneier has noted that even a 90 percent accurate system “will sound a million false alarms for every real terrorist” and that it is “unlikely that terrorists will pose for crisp, clear photos.” Bruce Schneier, Beyond Fear: Thinking Sensibly About Security in an Uncertain World, 190 (2003).
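Schneier’s point is a base-rate argument. With illustrative numbers (one genuine target per ten million people scanned, assumed here for the sketch rather than taken from his book), a short calculation shows how a “90 percent accurate” system drowns in false positives:

```python
# Illustrative base-rate calculation (the prevalence figure is assumed):
# suppose 1 real target per 10 million people scanned, and a system that is
# "90 percent accurate," i.e., a 10% false-positive rate and 90% true-positive rate.
population = 10_000_000
targets = 1
false_positive_rate = 0.10
true_positive_rate = 0.90

false_alarms = (population - targets) * false_positive_rate   # innocents flagged
true_hits = targets * true_positive_rate                      # real targets flagged

print(round(false_alarms))              # ~1,000,000 false alarms
print(round(false_alarms / true_hits))  # ~1.1 million false alarms per real hit
```

Even a far more accurate system only shrinks, not eliminates, this imbalance, because the rare-event base rate dominates the arithmetic.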

CJIS Annual Report 2016, 16, supra note 42.

See, e.g., Min-Chun Yang, et al., supra note 6.

See Lynch, FBI Plans to Have 52 Million Photos in its NGI Face Recognition Database by Next Year, supra note 57.

For years, EFF and other organizations called on the FBI to release more information about NGI and how it impacts people’s privacy. See, e.g., Testimony of Jennifer Lynch to the Senate Committee on the Judiciary Subcommittee on Privacy, Technology, and the Law, EFF (July 18, 2012), https://www.eff.org/document/testimony-jennifer-lynch-senate-committee-judiciary-subcommittee-privacy-technology-and-law; Letter to Attorney General Holder re. Privacy Issues with FBI’s Next Generation Identification Database, EFF (June 24, 2014), https://www.eff.org/document/letter-attorney-general-holder-re-privacy-issues-fbis-next-generation-identification.

Compare map of states sharing data with FACE Services on page 51 of the GAO Report, supra note 37, with map available in original version of Report, https://www.eff.org/deeplinks/2016/06/fbi-can-search-400-million-face-recognition-photos.

Byron Spice, Finding Faces in a Crowd, Carnegie Mellon U. (Mar. 30, 2017) https://www.cmu.edu/news/stories/archives/2017/march/faces-in-crowd.html; Introna & Nissenbaum, supra note 18 (concluding that, given lighting and other challenges, as well as the fact that so many people look like one another, it is unlikely that face recognition systems with high accuracy rates under these conditions will become an “operational reality for the foreseeable future”).

It is unclear at what resolution and distance the probe photos were taken and how many images of each person were available to compare the probe photos against (more photographs taken from different angles and under different lighting conditions could increase the probability of a match). See, e.g., Ben Guarino, Russia’s new FindFace app identifies strangers in a crowd with 70 percent accuracy, Wash. Post (May 18, 2016) https://www.washingtonpost.com/news/morning-mix/wp/2016/05/18/russias-new-findface-app-identifies-strangers-in-a-crowd-with-70-percent-accuracy.

United States v. Jones, 565 U.S. 400, 427-28, 429 (2012) (Justice Alito, in his concurring opinion, specifically referenced post-Katz wiretap laws when he noted that, “[i]n circumstances involving dramatic technological change, the best solution to privacy concerns may be legislative”).

Katz v. United States, 389 U.S. 347 (1967).

Berger v. New York, 388 U.S. 41 (1967) (striking down a state wiretapping law as facially unconstitutional. In striking down the law, the Court laid out specific principles that would make a future wiretapping statute constitutional under the Fourth Amendment).

Researchers at Georgetown have drafted model face recognition legislation that includes many of these principles. See Garvie, supra note 1, at https://www.perpetuallineup.org/sites/default/files/2016-10/Model%20Face%20Recognition%20Legislation.pdf.

See Privacy Act of 1974, 5 U.S.C. § 552a (2010); See also OECD Guidelines on the Protection of Privacy and Transborder Flows of Personal Data, Org. for Econ. Co-operation and Dev., (1980), http://www.oecd.org/document/18/0,3343,en_2649_34255_1815186_1_1_1_1,00.html. The full version of the FIPPs, as used by DHS, includes eight principles: Transparency, Individual Participation, Purpose Specification, Data Minimization, Use Limitation, Data Quality and Integrity, Security, and Accountability and Auditing; See Hugo Teufel III, Chief Privacy Officer, DHS, Mem. No. 2008-01, Privacy Policy Guidance Memorandum (Dec. 29, 2008), http://www.dhs.gov/xlibrary/assets/privacy/privacy_policyguide_2008-01.pdf.

Available at https://www.justice.gov/opcl/docs/rec-com-rights.pdf.

For example, in S. and Marper v. United Kingdom, the European Court of Human Rights held that retaining cellular samples and DNA and fingerprint profiles of people acquitted or people who have had their charges dropped violated Article 8 of the European Convention on Human Rights. S. and Marper. v. United Kingdom, App. Nos. 30562/04 and 30566/04, 48 Eur. H.R. Rep. 50, 77, 86 (2009).

In a recent survey on telehealth, conducted by Baltimore-based healthcare research firm Sage Growth Partners (SGP) and covering some 100 industry executives, about half of respondents said they have adopted telemedicine in some form; of the non-adopters, most said they see it as a priority. The survey findings also revealed that mobile apps and outpatient care are the “next frontier for telemedicine use.”

Indeed, hospitals across the U.S. are starting to embrace telemedicine initiatives more now—albeit still at a slow pace—and in a healthcare landscape that is prioritizing cutting costs and keeping patients out of the hospital, this type of remote care has carved out a niche. At NewYork-Presbyterian Hospital, a New York City-based academic medical center, and at its affiliates, including Columbia University Irving Medical Center and Weill Cornell Medical Center, also based in New York City, leveraging telemedicine has become a priority. As Peter Fleischut, M.D., senior vice president and chief transformation officer at NYP, contends, the institution’s digital health portfolio includes a variety of virtual care offerings.

In 2016, NewYork-Presbyterian announced the rollout of NYP OnDemand, a new suite of digital health services that included an array of innovation initiatives, including: Digital Second Opinion, a service in which NYP specialists from both ColumbiaDoctors and Weill Cornell Medicine can offer their clinical expertise for second opinions to patients around the country through an online portal; Digital Consults, which connects patients at NYP’s regional network hospitals to NYP hospital specialists; a digital emergency and urgent care program (Express Care), in which visitors to the NewYork-Presbyterian/Weill Cornell ED have the option of a virtual visit through real-time video interactions with a clinician after having an initial triage and medical screening exam; and finally, Digital Follow-Up Appointments, which provides patients a virtual follow-up option, instead of asking patients to come back to the office in person.

Fleischut, who served as NewYork-Presbyterian’s chief innovation officer prior to being named senior vice president and chief transformation officer last May, says that NYP’s core vision was to build a comprehensive suite of telehealth services, rather than just one program. In that sense, the organization has succeeded; to date, there are more than 50 telehealth programs in all. And in total, there have been approximately 15,000 of these virtual care encounters to date, with the care being delivered by any one of 700 providers, Fleischut says.

“We have had 600 percent growth in telehealth in the past year,” he says. Taking just Express Care as an example, patients could wait up to two-and-a-half hours from admission to discharge for an ED visit, but with Express Care, in that same admission-to-discharge window, patients are seen in approximately 31 minutes, “with the same levels of patient satisfaction and outcome,” Fleischut attests.

Peter Fleischut, M.D.

Of course, the Express Care program is meant only for patients with minor complaints, but in such cases, after ED patients go through triage—when a physician assistant or a nurse practitioner performs a medical screening exam—those who are judged to be in stable condition with no life-threatening injuries or symptoms are given the option of seeing an emergency room physician via a videoconference in a private room. Fleischut notes that even if the patient initially chooses video visits, he or she can still back out for any reason and switch to an in-person visit instead. “It really comes down to patient preference, but we find that patients prefer [the video visits] in many different [scenarios],” he says.

What’s more, NYP is also partnering with Weill Cornell and ColumbiaDoctors on a telepsych initiative. The motivation for this project, as Fleischut explains, is that in some of NYP’s hospitals—just like across the country—there simply is a shortage of qualified behavioral health specialists. As such, a patient can wait up to 24 hours to see a psychiatrist in certain hospitals. But now with the telepsych program, NYP allows for peer-to-peer visits and can connect patients to psychiatrists within an hour, says Fleischut. And that leads to reduced transfers and reduced admissions, he adds, also pointing out one recent case in which a telepsych patient was scheduled for an in-person follow-up encounter, but then called NYP and said he actually preferred doing the visit from home.

Furthermore, the same challenge applies with NYP neurologists: there simply aren’t enough experts available. Enter the organization’s telestroke program, which uses video conferencing and data sharing to allow 24/7 coverage for acute stroke care with rapid evaluation by a neurologist. This can save up to 7 minutes of treatment time, or about 14 million brain cells, as approximately two million brain cells die every minute during a stroke. To this end, NYP also has a mobile stroke unit, in which ambulances are equipped with a CT scanner to diagnose and treat the patient in the ambulance before arrival at the hospital, Fleischut says.

Despite the success that NYP has had with this digital suite of services, Fleischut does note one specific challenge that he sees as a major obstacle right now. He gives an example of a patient who comes in, is seen by a provider, and then it’s determined that a follow-up visit is needed. In this case, that doctor has an established relationship with the patient, so if the patient goes back home, that provider can do a follow-up visit with him or her without any issue. But if the patient happens to cross state lines, that provider is no longer able to do a follow-up video visit with that patient; per telemedicine regulations, only a telephone follow-up would be permitted.

But Fleischut finds this scenario frustrating, since the technology (the video visit) has advanced to the point where it provides higher-quality care than a phone encounter. “Follow-ups are a major issue in healthcare; the non-compliance for follow-up can be as high as 40 percent. And now we have a simple way to do a high-quality follow-up, but due to regulatory challenges, it forces us into using a technology that’s not as high-quality,” he says.

Fleischut does make clear that he supports regulation that requires a doctor-patient relationship to be established before a virtual visit takes place. But in the example he gives, that relationship has already been established, and still, if the patient crosses state lines, problems arise. “Now we have the means and a technology to ensure higher compliance and higher-quality care, and what I think is the right care for the patient, but it’s a challenge—even though it’s your own patient,” he says.

Nonetheless, NYP is continuing to surge ahead in its telehealth and other virtual care initiatives. Fleischut points to a recent collaboration between NewYork-Presbyterian and Walgreens in which kiosks, located in private rooms inside some Walgreens and Duane Reade drugstores in New York, offer instant examination, diagnosis and treatment of non-life-threatening illnesses and injuries through NYP OnDemand services. Here, patients can reach board-certified Weill Cornell Medicine emergency medicine physicians, who provide exams through an HD video-conference connection. At the end of the examination, if the physician writes a prescription, it can be instantly sent to the patient’s preferred pharmacy.

Fleischut opines that the next step is to ramp up remote patient monitoring (RPM) services, an innovation which he feels the industry is ready for. He also mentions the 2016 launch of NYP Ventures, a strategic investment fund that supports innovative digital healthcare companies. The venture arm of the organization just recently opened its second office in Silicon Valley. “We really don’t think about this as just telehealth,” Fleischut says. “We hone in on virtualization—and that’s everything from AI [artificial intelligence] to machine learning to robotic process automation. We feel that these are fundamental core tools that are needed in the future delivery of care.”

Traditional IT has to make way for the intelligence-based business model.

In 2003 Nick Carr declared that IT had become a ubiquitous commodity with no competitive advantage. Since then cloud computing has eliminated any remaining strategic value in traditional IT organizations.

Barriers to access are all but gone. Today the essential functions of business are serviced by free or inexpensive and easy-to-use tools. Competitors have equal access to all the same IT and share the same pool of IT talent, which is increasingly outsourced.

In the cloud era the cost of switching to new tools is primarily cultural, not financial. Today, choosing between Slack and Microsoft Teams is like toiling over the choice to splurge on 3M brand Post-It notes or getting a deal on the generics. Can you imagine conducting a TCO analysis to help you decide on Dunkin Donuts or Starbucks coffee for the break room?

You are not going to outrun your competitors because you purchased better office supplies. Speed is now the commodity we’re all bidding for. To win you need to produce radical efficiency gains, the likes of which haven’t been seen since businesses went digital in the first place. Those gains will be brought to you by artificial intelligence.

The way forward: Intelligence technology

At the same time, adopting AI is going to take time and money. These don’t have to be net-new expenditures. If you still have line items in your budget for IT, now is the time to direct most of that to AI. IT as we know it isn’t going to disappear, but it will have to become part of a larger strategy, integrated into an intelligent system. Intelligence technologies may just be the saving grace that makes information technologies strategically relevant again.

There’s massive opportunity right now for companies that can bridge the ambition-execution gap in AI. In a recent report the Boston Consulting Group and MIT Sloan found that “only about one in five companies has incorporated AI into some offerings or processes.” The majority of executives surveyed (60%) say “a strategy for AI is urgent for their organizations.” So why do only 50% actually have one in place?

AI is not a “nice to have” futuristic product feature. It’s a virtuous cycle of productivity that can collect, interpret and utilize data at a scale beyond human ability.

To maintain competitive advantage in the near term, you’ll have to go one step further and adopt an intelligence-technology framework, built on the IT foundation you already have. That means integrating AI-powered automation and prediction at scale, across all business units.

This isn’t magic; it’s work that can be executed in three simple (but not easy) steps.

Step 1: Clarify what’s working in your business and automate as much of that as possible. With new advancements, IT processes can increasingly be integrated into an AI system and automated.

Step 2: Use AI to collect and interpret data on what’s not working and why.

Step 3: Based on your analysis, make a prediction for what will work better. Automate the implementation to keep improving your machine learning systems.
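As a rough illustration of this automate-collect-predict loop (all names and the simple "most common failure reason" heuristic here are hypothetical, not from any real framework), the three steps might be wired together like this:

```python
# Hypothetical sketch of the three steps above: automate what works, collect
# data on what fails, predict an improvement. Illustrative only.

def automate(process):
    """Step 1: run a known-good process without human intervention."""
    return process["handler"](process["input"])

def collect_failures(results):
    """Step 2: gather data on what didn't work and why."""
    return [r for r in results if not r["ok"]]

def predict_improvement(failures):
    """Step 3: pick the change expected to help most (stub heuristic:
    target the most common failure reason)."""
    if not failures:
        return None
    reasons = [f["reason"] for f in failures]
    return max(set(reasons), key=reasons.count)

# Example: an automated run produces mixed results; the loop suggests what to fix next.
print(automate({"handler": str.upper, "input": "ship it"}))  # SHIP IT
results = [
    {"ok": True},
    {"ok": False, "reason": "timeout"},
    {"ok": False, "reason": "timeout"},
    {"ok": False, "reason": "bad_input"},
]
print(predict_improvement(collect_failures(results)))  # timeout
```

The point of the sketch is the shape of the cycle, not the stub logic: each automated run generates the data that drives the next prediction, which is then automated in turn.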

As for your old systems, continuing to invest in IT for IT’s sake isn’t sustainable. Your customer database, for example, needs to be much more than just a database: it is now the foundation of the data fueling AI. Systems such as payroll or time-tracking must integrate into a larger system, not be left to limp along behind.

Intelligence technology is the last frontier of productivity. It’s the fastest, easiest way to end dumb work, which, in turn, creates capacity for strategy and R&D.

Intelligence or bust

A lot more can be written about each of these three steps, but the most important thing is to get started. You don’t have time to conduct exhaustive research and build the perfect process before wading in. Work strategically and intentionally, but start now.

It’s going to be messy at first. To adopt a new framework for how to operate competitively and shift to intelligence technology will completely disrupt business as usual. This isn’t a shift relegated to one department. Operations and culture will have to be re-wired throughout your organization to make this a success. We’re talking about introducing a new teammate: Your people will be working with, not on, machines.

For this reason intelligence technology can’t just be owned by one department. Everyone at every level of the organization will need to familiarize themselves with the inner workings of a system that’s currently couched in hype and mystery.

How do we get our people to work together more efficiently within a complex organization? We get them to collaborate with one another and sit on the same side of the table. That cultural shift isn’t a “one and done” but an ongoing practice that takes time to adopt and adapt.

The same is true of integrating AI, but to an even greater extent. Customer service and marketing teams should interact with the AI as much as, if not more than, your developers. Their insights and direct customer experience are instrumental in training intelligent systems. As members of your team engage directly with AI, they’ll see the value firsthand.

The net result in your organization will be the elimination of needlessly repetitive work. Just as IT originally transformed internal communications and manual labor to fuel the rise of information-based technologies and strategies, so too will AI transform the workplace and business, and IT, as we know it. There will be more space for strategy and R&D: the kind of work best done by humans. The sooner companies recognize this shift, the greater their advantage as AI grows from the IT we know today.

Ben Lamm is the co-founder and CEO of Conversable, a conversational intelligence platform. He was previously the founder and CEO of Chaotic Moon Studios, a global creative technology powerhouse (acquired by Accenture), where he spearheaded the creation of some of the Fortune 500’s most groundbreaking digital products and experiences.


Education assessments like the OECD’s Programme for International Student Assessment (PISA) leverage technology to improve the assessment and teaching of 21st century skills at large scale. But how is this useful when classrooms don’t readily have access to technology? How does this help teachers and students in their daily learning environments? In today’s society, where much of the attention centers on technology and innovation, unequal access to technology can mean unequal access to quality education. How can research and learnings from technology-based, computer-delivered assessments contribute to all classrooms?

We are aware of, and at times alarmed by, the amount and nature of data that are collected and used by search engines and social media sites. Notwithstanding some of the concerns about these uses of data capture, there is no doubt that electronic capture can be harnessed for good. For example, there is emerging evidence that the collection of fine-grained student activity data associated with educational assessments can contribute directly to improving the teaching and learning of important 21st century skills. Putting aside efficiency functions like automated scoring of tests, online assessment provides an ideal opportunity to capture data that is useful to teachers for improving teaching and learning.

Reading involves the decoding and comprehension of written text, but digital reading adds to this a new challenge: the ability to move around within hypertext environments by predicting the likely content of links and using tools such as tabs, hyperlinks, and browser buttons. So digital reading comprises two parts: text processing and navigation. In the same way that search engines or social media sites track our interactions, the PISA Digital Reading Assessment recorded every mouse click and every keystroke, making it a simple matter to examine students’ navigational pathways closely.
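Deriving a navigational pathway from such click-and-keystroke capture is conceptually simple. The sketch below uses a hypothetical log format and page names (none of this is PISA's actual data schema):

```python
# Minimal sketch (hypothetical log format) of deriving a navigation pathway
# from captured interaction events, in the spirit of the PISA digital
# reading assessment's logging.
events = [
    {"t": 0.0, "type": "click", "page": "home"},
    {"t": 2.1, "type": "keystroke", "key": "Tab"},
    {"t": 3.4, "type": "click", "page": "library"},
    {"t": 7.9, "type": "click", "page": "schedule"},  # page containing the answer
]

# The navigation pathway is the ordered sequence of pages visited.
pathway = [e["page"] for e in events if e["type"] == "click"]
visited_target = "schedule" in pathway

print(pathway)         # ['home', 'library', 'schedule']
print(visited_target)  # True
```

A flag like `visited_target` is exactly the kind of simple indicator discussed next: whether the student ever reached the page holding the information needed to solve the task.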

One very simple indicator of navigation behavior is whether students visit the page (or pages) that contain the information needed to solve a task successfully. There are powerful benefits in being able to group students according to how they interact with material and whether they were able to solve tasks. Imagine that four students (Students A, B, C, and D) complete an assessment of digital reading. For illustrative purposes, consider that we look closely at the responses of these four students to a single assessment item. Student A seems to struggle both to move around effectively within a set of hyperlinked websites and to answer the assessment item correctly. Student B answers the item correctly, but without showing the ability to navigate well. Student C shows the ability to navigate successfully, but is not able to solve the task correctly. Finally, Student D navigates to the page containing the information needed and is able to solve the task.

Making these distinctions has clear and critical implications for both the practice of assessment development and classroom teaching. Consider Student B. This student answered an assessment item correctly, but without being able to locate the target information. Logically speaking, this combination is implausible, and we can conclude that Student B most likely guessed the correct response. For assessment developers, treating Student B in the same way as Student A, the student who lacked ability in both text processing and navigation, leads to improvements in important test properties such as reliability and validity.

At the level of the classroom, the comparisons between these four hypothetical students are just as informative: teachers can compare students A and C, for example. Neither student solved the task correctly, so under conventional scoring methods these students would have been treated in the same way. However, by tracking their navigational pathways, we can ascertain that unlike student A, student C did navigate successfully to the page containing the target information. It follows that the nature of intervention for these two students should be different. While student A appears to need assistance with both text processing and navigation, student C, who reached the target information but could not solve the task, needs support with text processing.
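The four hypothetical students amount to a simple two-by-two grouping on two booleans: did the student navigate to the target page, and did they answer correctly. A minimal sketch (illustrative labels, not PISA's actual scoring code):

```python
# The article's four hypothetical students, coded as (navigated, correct) pairs.
students = {
    "A": {"navigated": False, "correct": False},  # struggled with both
    "B": {"navigated": False, "correct": True},   # correct without navigating
    "C": {"navigated": True,  "correct": False},  # navigated but couldn't solve
    "D": {"navigated": True,  "correct": True},   # succeeded at both
}

def diagnose(record):
    """Map each cell of the 2x2 grouping to a teaching implication."""
    if record["navigated"] and record["correct"]:
        return "proficient"
    if record["navigated"]:
        return "focus on text processing"
    if record["correct"]:
        return "probable guess; treat answer as unreliable"
    return "support both navigation and text processing"

for name, record in students.items():
    print(name, "->", diagnose(record))
```

Conventional scoring collapses this table to the `correct` column alone, which is exactly why students A and C look identical without the navigation data.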

So how do we support teachers in identifying what indicators to look for when students are completing classroom tasks? As the example from digital reading shows, one of the major ways that online data capture and analysis can inform teaching and learning is through the identification of patterns of student thinking and behavior. Teachers can use this information in their strategic design of tasks, and in heightening their awareness of the different approaches students take when engaging with tasks. So although many classrooms do not have access to technologies, the learnings from digital environments can be applied effectively.

BEIJING—In a gleaming high-rise here in northern Beijing’s Haidian district, two hardware jocks in their 20s are testing new computer chips that might someday make smartphones, robots, and autonomous vehicles truly intelligent. A wiry young man in an untucked plaid flannel shirt watches appraisingly. The onlooker, Chen Yunji, a 34-year-old computer scientist and founding technical adviser of Cambricon Technologies here, explains that traditional processors, designed decades before the recent tsunami of artificial intelligence (AI) research, “are slow and energy inefficient” at processing the reams of data required for AI. “Even if you have a very good algorithm or application,” he says, its usefulness in everyday life is limited if you can’t run it on your phone, car, or appliance. “Our goal is to change all lives.”

In 2012, the seminal Google Brain project required 16,000 microprocessor cores to run algorithms capable of learning to identify a cat. The feat was hailed as a breakthrough in deep learning: crunching vast training data sets to find patterns without guidance from a human programmer. A year later, Yunji and his brother, Chen Tianshi, who is now Cambricon’s CEO, teamed up to design a novel chip architecture that could enable portable consumer devices to rival that feat—making them capable of recognizing faces, navigating roads, translating languages, spotting useful information, or identifying “fake news.”

Developers hope artificial intelligence–optimized chips like the Cambricon-1A will enable mobile devices to learn on their own

SHAN HE—IMAGINECHINA VIA AP IMAGES

Tech companies and computer science departments around the world are now pursuing AI-optimized chips, so central to the future of the technology industry that last October Sundar Pichai, CEO of Google in Mountain View, California, told The Verge that his guiding question today is: “How do we apply AI to rethink our products?” The Chen brothers are by all accounts among the leaders; their Cambricon-1A chip made its commercial debut last fall in a Huawei smartphone billed as the world’s first “real AI phone.” “The Chen brothers are pioneering in terms of specialized chip architecture,” says Qiang Yang, a computer scientist at Hong Kong University of Science and Technology (HKUST) in China.

Such groundbreaking advances far from Silicon Valley were hard to imagine only a few years ago. “China has lagged behind the U.S. in cutting-edge hardware design,” says Paul Triolo, an analyst at the Eurasia Group in Washington, D.C. “But it wants to win the AI chip race.” The country is investing massively in the entire field of AI, from chips to algorithms. The Chen brothers, for example, developed their chip while working at the Institute of Computing Technology of the Chinese Academy of Sciences here, and the academy backed them with seed funding when they spun out Cambricon in 2016. (The company is now worth $1 billion.)

Last summer, China’s State Council issued an ambitious policy blueprint calling for the nation to become “the world’s primary AI innovation center” by 2030, by which time, it forecast, the country’s AI industry could be worth $150 billion. “China is investing heavily in all aspects of information technology,” from quantum computing to chip design, says Raj Reddy, a Turing Award–winning AI pioneer at Stanford University in Palo Alto, California, and Carnegie Mellon University in Pittsburgh, Pennsylvania. “AI stands on top of all these things.”

In recent months, the central government and Chinese industry have been launching AI initiatives one after another. In one of the latest moves, China will build a $2.1 billion AI technology park in Beijing’s western suburbs, the state news service Xinhua reported last month. Whether that windfall will pay off for the AI industry may not be clear for years. But the brute numbers are tilting in China’s favor: The U.S. government’s total spending on unclassified AI programs in 2016 was about $1.2 billion, according to In-Q-Tel, a research arm of the U.S. intelligence community. Reddy worries that the United States is losing ground. “We used to be the big kahuna in research funding and advances.”

Closing the intelligence gap

The United States leads China in private investment in artificial intelligence (AI) and in the number and experience of its scientists. But Chinese firms may gain an advantage from having more data—including data not in the public domain—for honing algorithms.

- Years of experience of each nation’s data scientists: United States, more than half have more than 10 years; China, 40 percent have less than 5 years.
- AI patent applications, 2010–2014: United States, 15,317 (first in world); China, 8,410 (second).
- Number of workers in AI positions: United States, 850,000 (first); China, 50,000 (seventh).
- Percent of private AI investment (2016): United States, 66% (first); China, 17% (second).
- Global ranking of data openness: United States, No. 8; China, No. 93.

DATA: ASTAMUSE; LINKEDIN; MCKINSEY GLOBAL INSTITUTE

China’s advantages in AI go beyond government commitment. Because of its sheer size, vibrant online commerce and social networks, and scant privacy protections, the country is awash in data, the lifeblood of deep learning systems. The fact that AI is a young field also works in China’s favor, argues Chen Yunji, by encouraging a burgeoning academic effort that has put China within striking distance of the United States, long the leader in AI research. “For traditional scientific fields, Chinese [scientists] have a long way to go to compete with the U.S. or Europe. But for computer science, it’s a relatively new thing. Young people can compete. Chinese can compete.” In an editorial last week in The Boston Globe, Eric Lander, president of the Broad Institute in Cambridge, Massachusetts, warned that the United States has at best a 6-month lead over China in AI. “China played no role in launching the AI revolution, but is making breathtaking progress catching up,” he wrote.

The fierce global competition in AI has downsides. University computer science departments are hollowing out as companies poach top talent. “Trends come and go, but this is the biggest one I’ve ever seen—a professor can go into industry to make $500,000 to $1 million” a year in the United States or China, says Michael Brown, a computer scientist at York University in Toronto, Canada.

In a more insidious downside, nations are seeking to harness AI advances for surveillance and censorship, and for military purposes. China’s military “is funding the development of new AI-driven capabilities” in battlefield decision-making and autonomous weaponry, says Elsa Kania, a fellow at the Center for a New American Security in Washington, D.C. In the field of AI in China, she warned in a recent report, “The boundaries between civilian and military research and development tend to become blurred.”

The Chinese government has begun using facial scans to identify pedestrians and jaywalkers.

REX FEATURES VIA AP IMAGES

Just as oil fueled the industrial age, data are fueling advances of the AI age. Many practical AI advances are “more about having a large amount of continually refreshed data and good-enough AI researchers who can make use of that data, rather than some brilliant AI theoretician who doesn’t have as much data,” says computer scientist Kai-Fu Lee, founder of Sinovation Ventures, a venture capital firm here. And China, as The Economist recently put it, is “the Saudi Arabia of data.”

Every time someone enters a search query into Baidu (China’s Google), pays a restaurant tab with WeChat wallet, shops on Taobao (China’s Amazon), or catches a ride with Didi (China’s Uber), among a plethora of possibilities, those user data can be fed back into algorithms to improve their accuracy. A similar phenomenon is happening in the United States, but China now has 751 million people online, and more than 95% of them access the internet using mobile devices, according to the China Internet Network Information Center. In 2016, Chinese mobile payment transactions totaled $5.5 trillion, about 50 times more than in the United States that year, estimates iResearch, a consulting firm in Shanghai, China.

Baidu, which runs China’s dominant search engine, both gathers and exploits much of these data. In parking garages under its futuristic glass-and-steel complex in northern Beijing, cars crowned with lidar sensors prowl on test runs, collecting mapping data that will feed Baidu’s autonomous driving lab. In the main lobby, staffers’ faces are scanned to open the security gates. Of China’s tech titans—Baidu, Alibaba, and Tencent—Baidu was the first to pour resources into AI. It now employs more than 2000 AI researchers, including staff in California and Seattle, Washington.

A few years ago, Baidu added an AI-powered image search to its mobile app, allowing a user to snap a photo of a piece of merchandise for the search engine to identify, and then look up price and store information.

Early object recognition programs focused on outlines. But many objects—for example, plates of food in a restaurant—have basically the same outline. What’s needed is more precise detection of interior patterns, or “textures,” says Feng Zhou, a data scientist in Cupertino, California, who heads Baidu’s new Fine-Grained Image Recognition Lab. Now, Baidu’s AI image search can distinguish between, for instance, a stewed tofu dish called mapo tofu and fried tofu dishes. (A U.S. equivalent might be detecting the difference between oatmeal and rice.) Better algorithms have helped, Zhou says, but so has an abundance of training data uploaded by internet users.
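The outline-versus-texture distinction Zhou describes can be sketched with a toy example. This is not Baidu's actual system; it is a minimal illustration in which two shapes share the same silhouette, so only an interior texture statistic separates them.

```python
# Toy illustration (not Baidu's system): two "dishes" share the same
# outline, so a silhouette-based feature cannot tell them apart, but a
# simple interior-texture statistic can. Pixel value 0 = background;
# 1 and 2 are two foreground intensities.

def mask(img):
    """Foreground silhouette: 1 where the object is, 0 elsewhere."""
    return [[1 if v else 0 for v in row] for row in img]

def texture_transitions(img):
    """Count horizontally adjacent foreground pixels whose intensities differ."""
    t = 0
    for row in img:
        for a, b in zip(row, row[1:]):
            if a and b and a != b:
                t += 1
    return t

smooth  = [[1] * 4 for _ in range(4)]                               # uniform interior
checker = [[1 + (x + y) % 2 for x in range(4)] for y in range(4)]   # patterned interior

assert mask(smooth) == mask(checker)    # identical outlines
print(texture_transitions(smooth))      # 0  -> smooth interior
print(texture_transitions(checker))     # 12 -> textured interior
```

Real fine-grained recognizers learn such interior features automatically from large labeled datasets, which is why the flood of user-uploaded images matters as much as the algorithms.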

The data deluge is also transforming academia. “When the AI textbooks were written, we didn’t have access to that kind of data,” Yang says. “About 5 years ago, we decided that classroom education was not sufficient. We needed to have partnerships with industry, because the big technology companies not only have lots and lots of data, but also a variety of data sources and many interesting contexts to apply AI.” Today, a group of HKUST professors and Ph.D. students work on AI projects with Tencent, China’s social media giant. They have access to data from WeChat, the company’s ubiquitous social network, and are developing “intelligent” chat capabilities for everything from customer service to Buddhist spiritual advice.

Such collaborations are vulnerable, however, as China’s academic outposts struggle to keep faculty members capable of designing new AI algorithms from decamping to industry. “University students know that AI is a very cool thing, which might also make you rich,” Chen Yunji says.

The Chinese government is also drinking from the data firehose—and is honing AI as a tool for staying in power. The State Council’s AI road map explicitly acknowledges AI’s importance to “significantly elevate the capability and level of social governance, playing an irreplaceable role in effectively maintaining social stability.”

Some worry that the government’s embrace of AI could further stifle dissent in China. Enhanced technology for recognizing context and images allows for more effective real-time censorship of online communications, according to a report from The Citizen Lab, a research outfit at the University of Toronto. Also at the heart of this debate is facial recognition technology, which is powered by AI algorithms that analyze minute details of a person’s face in order to pick it out from among thousands or millions of potential matches.
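The matching step in such systems can be sketched as a nearest-neighbor search over embedding vectors. The sketch below is hypothetical: real systems derive the embeddings from deep networks trained on millions of faces, and the names, vectors, and threshold here are invented for illustration.

```python
# Hypothetical sketch of face identification as nearest-neighbor search.
# Each enrolled identity maps to an embedding vector; a probe face is
# matched to the most similar enrolled embedding above a threshold.
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

gallery = {                      # toy embeddings, invented for illustration
    "alice": [0.9, 0.1, 0.3],
    "bob":   [0.2, 0.8, 0.5],
    "carol": [0.4, 0.4, 0.9],
}

def identify(probe, threshold=0.95):
    """Return the best-matching identity, or None if no match is close enough."""
    name, score = max(((n, cosine(probe, e)) for n, e in gallery.items()),
                      key=lambda p: p[1])
    return name if score >= threshold else None

print(identify([0.88, 0.12, 0.31]))  # a probe close to "alice"
```

Scaling this lookup from three enrolled identities to millions, and from posed still photos to frames of surveillance video, is the scientific leap the article describes.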

People in China can now use facial scans to authorize digital payments at some fast food restaurants.

JIN KE—IMAGINECHINA

Facial recognition is now used routinely in China for shopping and to access some public services. For example, at a growing number of Kentucky Fried Chicken restaurants in China, customers can authorize digital payment by facial scan. Baidu’s facial recognition systems confirm passenger identity at certain airport security gates. Recent AI advances have made it possible to identify individuals not only in up-close still photos, but also in video—a far more complex scientific task.

China’s attitude toward such advances contrasts with the U.S. response. When the U.S. Customs and Border Protection last May revealed plans to use facial matching to verify the identities of travelers on select flights leaving the United States, a public debate erupted. In an analysis, Jay Stanley of the American Civil Liberties Union in Washington, D.C., warned of the potential for “mission creep”: With new AI technologies, “you can subject thousands of people an hour to face recognition when they’re walking down the sidewalk without their knowledge, let alone permission or participation.”

In China the government is already deploying facial recognition technology in Xinjiang, a Muslim-majority region in western China where tensions between ethnic groups erupted in deadly riots in 2009. Reporters from The Wall Street Journal who visited the region late last year found surveillance cameras installed every hundred meters or so in several cities, and they noted facial recognition checkpoints at gas stations, shopping centers, mosque entrances, and elsewhere. “This is the kind of thing that makes people in the West have nightmares about AI and society,” says Subbarao Kambhampati, president of the Association for the Advancement of Artificial Intelligence (AAAI) in Palo Alto and a computer scientist at Arizona State University in Tempe. In China, he says, “people are either not worried, or not able to have those kinds of conversations.”

Even toilet paper in public restrooms is now being dispensed, in limited amounts, after a facial scan.

WANG ZHAO/AFP/GETTY IMAGES

China’s AI researchers show no signs of slowing down. In October 2016, a White House report found that Chinese researchers now publish more deep learning–related papers in all journals than researchers from any other country. When adjusted for publication impact factor, the United States still produced the most influential AI-related papers, followed by the United Kingdom, with China only narrowly behind, according to a recent McKinsey Global Institute analysis.

Kambhampati adds that before 2012 or so, submissions from China to major AI conferences “used to be quite small.” At AAAI’s annual meeting earlier this week in New Orleans, Louisiana, he says, accepted papers from China nearly equaled those from the United States. “For the longest time, there was a general feeling that China was always second-rate in technology. That may have been true, but it’s also changing quite quickly.”

The government wants the boom to continue. At the end of 2017, the science ministry issued a 3-year plan to guide AI development, and named several large companies as “national champions” in key fields: for example, Baidu in autonomous driving, and Tencent in computer vision for medical diagnosis. Zha Hongbin, a professor of machine intelligence at Peking University here who consults for the government, says China plans to expand the number of universities offering dedicated machine learning and AI departments.

In the meantime, industry continues to bet heavily on AI. Last October, for instance, Alibaba announced plans to invest $15 billion in research over 3 years to build seven labs in four countries that will focus on quantum computing and AI.

A decade ago, China’s best AI researchers might have left for plum jobs in Silicon Valley. Instead, increasing numbers of them are staying at home to lift the nation’s AI industry, says Xia Yan, a 30-year-old data scientist who co-founded Momenta, an autonomous driving startup here. “Many of us are choosing to go from an academic background to running a company,” Xia says. “We want to see our work in the real world. It’s a new era.”

With an ongoing digital transformation taking place across healthcare, health system executive leaders are increasingly investing in IT and innovative technologies to meet clinical and operational goals.

Yet, at the same time, a survey by the American College of Healthcare Executives found that healthcare CEOs cited financial challenges as their number-one concern. Healthcare executive leaders are challenged to finance IT even as health systems feel mounting financial pressure.

According to data from the Englewood, Colo.-based Medical Group Management Association (MGMA), IT expenses for physician practices are on a slow and steady rise. Last year, for example, physician-owned practices spent nearly $2,000 to $4,000 more per full-time physician on IT operating expenses than they did the prior year. Total IT expenses per physician last year ran between $14,000 and $19,000, depending on specialty, according to MGMA.

What’s more, a 2016 cost and revenue report from MGMA found that physician-owned multispecialty practices spent more than $32,500 per full-time physician on information technology equipment, staff, maintenance, and other related expenses. In addition, technology costs have grown by more than 40 percent since 2009. Other trends in the healthcare industry, such as practices investing in online patient portals, have also contributed to increased technology costs.

Healthcare Informatics Associate Editor Heather Landi recently spoke with Gary Amos, CEO of Commercial Finance, North America, at Siemens Financial Services (SFS), about financing healthcare IT. Amos, who is based in the Philadelphia area, has been with the organization for 11 years. SFS finances both technology and healthcare equipment for Siemens Healthineers and other leading healthcare providers. Below are excerpts from that interview.


How do you see the landscape around the financing of capital acquisitions in healthcare at this time?

There’s a couple of ways to approach it, and I recommend we extend our perspective beyond IT. I think from what we see in the market and where the digital transformation is driving healthcare you need to view it along the entire healthcare continuum—from the experience of the patient, to healthcare provider and finally from the viewpoint of a financial expert.

First, let’s view it from the consumer perspective. A technology transformation has patients relying on mobile apps, seeking information online and becoming more engaged and proactive in managing personal health. Physicians and providers who can offer their patients further customized and automated diagnoses are at a competitive advantage for patient retention. Providers who adopt new digital technologies and equipment are able to further automate and connect patient data across larger healthcare IT networks. This enables providers to manage data smarter and provide stronger diagnoses for patients, increasing speed, efficiency and leading to higher patient satisfaction.

I think from the provider standpoint, we’re currently evaluating different financial models and seeing how they can enable desired outcomes across a wide array of scenarios. You hear a lot in the market right now about MES, or managed equipment services. It’s no longer about how we finance a single asset. The conversation is shifting to how we enable larger projects that include not only the diagnostic equipment required but also the services and performance-based metrics that allow for technology evolution and planning cycles over a longer period of time. The new demand for capital is in financing a bundled package, with a commitment to a level of service in a formalized agreement and underlying performance metrics in place. Today’s healthcare providers, working with tighter budgets and tasked to do more with less, require a return on investment. That’s why bundled services that can promise specific outcomes are highly desired.

Now, from the financial expert’s perspective, we are helping providers explore financing options that extend beyond a short-term goal. It’s about looking at needs over a longer planning horizon, determining the right equipment to support those needs, and structuring the financing to improve equipment performance and account for asset longevity as the demands of digital technology evolve.

Healthcare organizations continue to face financial pressures. What are some financial techniques that healthcare organizations can use to meet today’s digital demands?

According to the Centers for Medicare and Medicaid Services (CMS), U.S. healthcare spending grew 4.3 percent in 2016, and as a share of gross domestic product, it accounted for nearly 18 percent of U.S. spending. Though healthcare spending is up, budgets are still tight and the challenge is increasingly becoming how we can provide a more precise diagnosis to foster individualized prevention and therapy. In addition, how do we reduce the time frame of diagnosis and treatment and improve the patient experience across the continuum of care? In the past, your treatment or your protocol might have run a course of a number of months. Providers who can reduce the amount of time that’s required to treat or prevent illness will find themselves in a stronger financial position. Reimbursement of resources and capitation payments are driving the headwinds for hospitals, physicians and outpatient centers.

For example, when a primary care provider signs a capitation agreement, the contract must include a list of specific services for patients. The amount of the capitation is determined in part by the number of services provided and varies from health plan to health plan. Most capitation payment plans for primary care, however, cover preventive, diagnostic, and treatment services; injections, immunizations, and medications administered in the office; outpatient laboratory tests done either in the office or at a designated laboratory; health education and counseling services performed in the office; and routine vision and hearing screening.

It is not unusual for large groups or physicians involved in primary care network models to also receive an additional capitation payment for diagnostic test referrals and subspecialty care. When healthcare providers adopt such plans, managed care organizations can control healthcare costs and hold their physicians accountable for delivering improved services.
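As a rough illustration of the arithmetic, a capitated payment scales with enrolled members rather than with services rendered. The per-member-per-month (PMPM) rates below are hypothetical, not drawn from any actual health plan.

```python
# Illustrative only: how a monthly capitation payment might be computed.
# PMPM rates are hypothetical, not from any actual health plan.

PMPM_BASE = 18.50           # covers the contracted primary care services
PMPM_REFERRAL_POOL = 4.00   # optional add-on for diagnostic/subspecialty referrals

def monthly_capitation(members, include_referral_pool=False):
    """Monthly payment to the practice for its enrolled panel."""
    rate = PMPM_BASE + (PMPM_REFERRAL_POOL if include_referral_pool else 0.0)
    return members * rate

print(monthly_capitation(2500))                              # 46250.0
print(monthly_capitation(2500, include_referral_pool=True))  # 56250.0
```

The point of the structure is visible even in this sketch: the practice's revenue is fixed per member, so reducing the cost and duration of treatment, rather than billing more services, is what improves its financial position.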

What should healthcare CIOs and CTOs be thinking about right now?

I think from a CTO/CIO standpoint, a lot of what happened in the past is that they were focused on EMRs (electronic medical records), and as those platforms became stable, that allowed for the evolution of a more digitalized age. You are no longer moving patient records in a manila folder from doctor to doctor; records now move through online platforms and mobile devices, giving your doctor a holistic view of the patient’s records and data. Today’s executives need to be concerned with adopting digital technology and equipment that integrates data exchange and enables population health management. For example, if a clinician has a broader view of health patterns and trends across patients, it helps them assess needs and transform care delivery models to improve the patient experience. Transforming care delivery is about leveraging established and new care models to provide more accessible and highly efficient healthcare offerings. For a leader in today’s healthcare environment, the focus should be on digitalizing healthcare processes, expanding precision medicine, transforming care delivery, and improving the patient experience.

With the overall trends in healthcare right now—population health, the transition to value-based care, and all the new regulations—how will this impact the financing of healthcare IT in the next few years?

As the country works to adapt to healthcare demands, private financing is uniquely positioned to take a leading role in supporting today’s digital market shift. An aging population, chronic conditions rising, and structural changes from the Affordable Care Act (ACA) impose many financial pressures on healthcare providers. Complex, clinical procedures are on the rise, but investments in technology can help make these procedures simpler. In order to meet consumer demands and keep U.S. healthcare infrastructure, technology and services modernized, the healthcare sector requires some serious investments. With today’s digital transformation overhauling healthcare, this is where private funding sources can step in to help by enabling organizations to keep pace through updated IT infrastructure.

And, again, you’re seeing financial models evolving as a result of all this, is that right?

Whether it’s a large institutional-type hospital or a smaller-scale physician owned practice, everyone will have a call to action to try to transform their business and operational model, using the technology that’s available. Some of the traditional financing products, such as loans or equipment leases, will remain but could take shape or form into different structures. Unitary payment models where there is a more holistic approach to healthcare management and financing will drive the digital transformation. Coupling payment models for equipment and services together will continue to be challenged and the unitary structures will move to the forefront of discussion.

The digital transformation of healthcare technology, through connecting patient data across greater IT networks, will require financial models to evolve with the acceleration of technological advancements. As healthcare technology becomes more automated, service and delivery methods will become more patient-centric than ever before. Financial models will enable healthcare providers to accomplish their clinical and operational goals through the adoption of digitized information technology.