Does Alexa make Amazon relevant for consumer digital tech?

I was listening to a podcast where they were hyping up the new Echo Show thing with the screen.

Alexa seems to have mindshare among some bloggers/podcasters. I think they touted that it works better than Siri. It even got some publicity when one of the characters talked to Alexa on Mr. Robot.

I would imagine Amazon intended Alexa to be a key part of their phone, which failed badly. So they've salvaged that development work to sell these speakers, and for most people who get an Echo product, the main use seems to be giving voice commands to play music. The exception is people heavily invested in smart home products that support Alexa, but the smart home is still a niche; it requires installation and a real commitment to adopting IoT in the home early.

Why are people with iOS or Android devices bothering with Alexa devices? Amazon could be ahead of Apple in AI but surely not Google or MS?

Now there are rumors that Apple and MS are planning their own AI/digital-assistant fixed devices for the home as well. Google has put out its Google Home product, but supposedly Amazon has 70% market share, and Google may have to come out with some new iteration with screens as well.

Of course Amazon isn't releasing actual sales figures, but the fact that they continue to crank out Echo devices suggests there's a business there. Still, I'd guess it's a fraction of smartphone sales, even of smartphones over $400.

Wall Street is bullish on Amazon as still a growth stock, unlike Apple (and maybe the others). Some of it has to do with their cloud business, some of it has to do with the continuing conversion of retail to online consumer spending. Can Alexa win the AI crown for Amazon?

I don't have a very high opinion of the critical thinking skills of tech pundits in general, but the fawning over Echo is one of the silliest things I've seen in years.

First, it's not better than either Siri or Google Assistant. Yes, Alexa can do a wider range of things, because Amazon permits arbitrary extensibility by third parties. But there's a very good reason Apple and Google haven't enabled something like this. Building a language model for a particular domain (say, messaging or home automation) is hard. Most developers aren't really up to it. Amazon just punts on this problem. When you're creating an Alexa 'skill,' you're basically just asked to provide a bunch of ways people might phrase commands. In many cases, this means using Alexa features requires extremely rigid syntax. It's a verbal command line.
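To make the "verbal command line" point concrete, here's a toy sketch (my own illustration in Python, not Amazon's actual skill machinery) of what matching against a fixed list of sample utterances amounts to: anything the skill author didn't enumerate simply fails.

```python
# Toy model of sample-utterance matching (illustrative only, not the
# real Alexa skill pipeline): the skill author enumerates phrasings up
# front, and anything outside that list fails to match.

SAMPLE_UTTERANCES = {
    "turn on the lights":   ("LightsOn", {}),
    "turn on all lights":   ("LightsOn", {"group": "all"}),
    "switch on the lights": ("LightsOn", {}),
}

def match(utterance):
    """Exact-match lookup: a 'verbal command line'."""
    return SAMPLE_UTTERANCES.get(utterance.lower().strip())

print(match("Turn on all lights"))  # ('LightsOn', {'group': 'all'})
print(match("all of them"))         # None: the paraphrase was never enumerated
```

Real skills get a bit more slack than a literal dictionary lookup, but the burden of anticipating phrasings still sits with the developer rather than with a shared language model.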

I picked up an Echo Dot when it was on sale last year, to check it out. I've got some Hue lights I wanted to use it with. Here's an example of the sort of thing I'm talking about (from memory, phrasing may not be exact):

Quote:

"Alexa, turn on all lights."

"There are multiple devices of that type. Which one did you want to turn on?"

"All of them."

"Sorry, I couldn't find a device group with the name 'all of them.'"

Great, that's useful. ("Hey Siri, turn on all lights" works fine, in contrast.)

But in the pundit fawning over Alexa, this trade-off is swept under the rug, and Alexa simply gets credit for offering access to more services. That, plus the fact that the dedicated hardware is obviously a lot better at picking up a command at a distance than a phone (which also has a flip side, in my experience: far more instances where the device erroneously thinks it has heard its name), causes people to declare that Amazon has some substantial lead over Apple here, when if you actually consider the difficulty of the problems each company has solved, it rather seems like it's the other way around.

Second, this entire product concept is still pretty iffy. Will ambient access to voice assistants be ubiquitous eventually? Sure. But conversational interfaces are, today, still sufficiently primitive that this product category just isn't very interesting yet, and even as they mature, voice-only and voice-first interfaces will likely never replace visual interfaces as the primary means of interacting with computing devices. There just isn't a huge amount of strategic importance to this market now, and there may not be for quite a while.

In particular, I find the whole narrative that Apple is screwing up because they don't have an Echo-like device yet to be completely bizarre. The hard part of this is the software. Apple has Siri, and continues to improve it. Apple has access to far more data to guide those improvements than Amazon does. Apple already has devices that can deliver its conversational UI in your pocket, on your wrist, and via AirPods, practically in your skull. Isn't mobile supposed to be the future? Why, in this case, have pundits decided there's something magical about a device that sits in a fixed location in your home? Even if you assume there's something important here, isn't it obvious that there are far lower barriers to Apple shipping a successful Echo-like device than there are to Amazon shipping successful phones and watches?

I don't have a very high opinion of the critical thinking skills of tech pundits in general, but the fawning over Echo is one of the silliest things I've seen in years.

They're super trendy among the Kara Swisher type of pundits -- these guys 'get' the industry, but they don't usually get the tech right. They play with every gadget but deep-dive into very few. Pundits are not all bad, but the danger is that these people have the ear of businesspeople because they speak the same language. When they get it wrong, they can do some damage.

Quote:

Second, this entire product concept is still pretty iffy. Will ambient access to voice assistants be ubiquitous eventually? Sure. But conversational interfaces are, today, still sufficiently primitive that this product category just isn't very interesting yet, and even as they mature, voice-only and voice-first interfaces will likely never replace visual interfaces as the primary means of interacting with computing devices. There just isn't a huge amount of strategic importance to this market now, and there may not be for quite a while.

But if it's inevitable, someone's going to do it. It's not as though Google and Apple disappear from the scene if Amazon pulls back from this market. Most stuff starts out less than fully cooked.

The possible criticism here is that Amazon has a tendency to throw everything against the wall and see what sticks. Echo. Echo Dot. Echo Tap. Etc. Etc.

Quote:

In particular, I find the whole narrative that Apple is screwing up because they don't have an Echo-like device yet to be completely bizarre. The hard part of this is the software. Apple has Siri, and continues to improve it. Apple has access to far more data to guide those improvements than Amazon does. Apple already has devices that can deliver its conversational UI in your pocket, on your wrist, and via AirPods, practically in your skull. Isn't mobile supposed to be the future? Why, in this case, have pundits decided there's something magical about a device that sits in a fixed location in your home? Even if you assume there's something important here, isn't it obvious that there are far lower barriers to Apple shipping a successful Echo-like device than there are to Amazon shipping successful phones and watches?

Yes, but they're different.

If you say "phone, play Bohemian Rhapsody" while you're out in public, you're being a jerk.

Having a phone constantly listening anywhere *other* than home is a privacy and security concern. In your own house, that's your business.

First, it's not better than either Siri or Google Assistant. Yes, Alexa can do a wider range of things, because Amazon permits arbitrary extensibility by third parties. But there's a very good reason Apple and Google haven't enabled something like this. Building a language model for a particular domain (say, messaging or home automation) is hard. Most developers aren't really up to it. Amazon just punts on this problem. When you're creating an Alexa 'skill,' you're basically just asked to provide a bunch of ways people might phrase commands. In many cases, this means using Alexa features requires extremely rigid syntax. It's a verbal command line.

This may be true of Alexa, but it's not true of Cortana: Cortana Skills work off of "intents," where MS does the natural language processing and lets your application know what it thinks the user was talking about, with probabilities for each intent. So you can build skills without building the language model (though you can do that too), without being a closed box. https://www.luis.ai/
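For a rough idea of what that intents-plus-confidence shape looks like, here's a sketch in Python (the field names are my own illustration, not the exact LUIS schema):

```python
# Sketch of an intent-classification result (field names are
# illustrative, not the actual LUIS schema): the platform does the
# natural language processing and hands the app ranked intents with
# confidence scores, plus any extracted entities.

nlu_response = {
    "query": "dim the living room lights to 20 percent",
    "intents": [
        {"intent": "SetBrightness", "score": 0.91},
        {"intent": "TurnOff",       "score": 0.05},
        {"intent": "None",          "score": 0.02},
    ],
    "entities": [
        {"type": "room",  "value": "living room"},
        {"type": "level", "value": "20 percent"},
    ],
}

def top_intent(response, threshold=0.5):
    """Pick the best-scoring intent; fall back to None below the threshold."""
    best = max(response["intents"], key=lambda i: i["score"])
    return best["intent"] if best["score"] >= threshold else None

print(top_intent(nlu_response))  # SetBrightness
```

The point being that the app only ever sees ranked intents and entities, so the hard NLP problem stays on the platform side instead of being punted to every skill author.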

In my limited experience, Alexa crushes Siri in usability and accuracy. I'm an iPhone user who gave up on ever using Siri. She's terrible with name recognition; ask her a research question about food and she only ever responds with directions to restaurants.

A couple of phones back I started asking Siri a set of questions and would note the response. With future versions I'd ask the same question. Rather than her responses improving, more of the questions would get answered with responses like "I'm not sure what you mean."

The extensibility of Alexa and the "intuition" of Cortana have been leaving Siri in the dust.

I don't have a very high opinion of the critical thinking skills of tech pundits in general, but the fawning over Echo is one of the silliest things I've seen in years.

That's putting it mildly. It's goddamn stupid is what it is. It amounts to nothing more than an "always on" bug/listening device whose main purpose is to collect as much data as possible. But it's sooooo damn cute, isn't it? It's a fucking surveillance tool in a cutesie, innocent-looking disguise.

Every time I see people looking at those things in the mall, I make a point of going over and pointing out the above. They usually say something to the effect of "Wow, never thought of that" and walk away EVERY time. Yeah, I'm a real douche like that.

And we're going to trust Amazon, Google, and Facebook, all companies making their money through data harvesting and stockpiling why? To surveil us why?

I have regular access to both and don't find this to be the case. Alexa is somewhat better at answering some types of trivia questions; Siri makes up for it by having more sophisticated language models around things like home automation, and by having more knowledge of useful context due to iOS integration. Echo, as I mentioned, is somewhat better at picking up a voice command at a distance than a phone, but this probably has more to do with easy-to-match hardware (Echo has seven microphones, with beam-forming tech) than with hard-to-match software smarts.
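For what it's worth, much of that far-field advantage is straightforward signal processing. A toy delay-and-sum beamformer (purely illustrative: made-up delays and noise, not Echo's actual DSP) shows how averaging several aligned microphones lifts speech out of uncorrelated noise:

```python
import numpy as np

# Toy delay-and-sum beamformer (illustrative only): align each mic's
# signal toward the source, then average. Speech adds coherently while
# uncorrelated room noise averages down.

rng = np.random.default_rng(0)
n_mics, n_samples = 7, 1000
delays = rng.integers(0, 20, size=n_mics)  # per-mic arrival delay (samples)

speech = np.sin(2 * np.pi * 0.01 * np.arange(n_samples))  # stand-in for voice
mics = np.stack([
    np.roll(speech, d) + rng.normal(0.0, 1.0, n_samples)  # delayed speech + noise
    for d in delays
])

# Undo the (here, known) delays and average across the array.
aligned = np.stack([np.roll(m, -d) for m, d in zip(mics, delays)])
beamformed = aligned.mean(axis=0)

def snr_db(signal, reference):
    """SNR of `signal` against the clean `reference`, in dB."""
    noise = signal - reference
    return 10 * np.log10(reference.var() / noise.var())

print(f"single mic:  {snr_db(aligned[0], speech):.1f} dB")
print(f"beamformed:  {snr_db(beamformed, speech):.1f} dB")
```

With seven mics the uncorrelated noise power drops roughly by the array size, which is the kind of far-field edge dedicated hardware buys over a single phone mic.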

The real story here is that the scenario where Google runs away with AI-dependent markets, because their search engine and very public investment in machine learning give them some insurmountable advantage, does not particularly appear to be panning out. Others can competently play in these product categories.

In my limited experience, Alexa crushes Siri in usability and accuracy. I'm an iPhone user who gave up on ever using Siri. She's terrible with name recognition; ask her a research question about food and she only ever responds with directions to restaurants.

A couple of phones back I started asking Siri a set of questions and would note the response. With future versions I'd ask the same question. Rather than her responses improving, more of the questions would get answered with responses like "I'm not sure what you mean."

The extensibility of Alexa and the "intuition" of Cortana have been leaving Siri in the dust.

But if you're driving or away from home, you can only use Siri. So you don't use Siri to call numbers at least while driving?

Maybe that's the limiting factor of Alexa, that it's tethered to your home and to really get value out of it, you have to invest in smart home devices which Alexa can control.

In my limited experience, Alexa crushes Siri in usability and accuracy. I'm an iPhone user who gave up on ever using Siri. She's terrible with name recognition; ask her a research question about food and she only ever responds with directions to restaurants.

A couple of phones back I started asking Siri a set of questions and would note the response. With future versions I'd ask the same question. Rather than her responses improving, more of the questions would get answered with responses like "I'm not sure what you mean."

The extensibility of Alexa and the "intuition" of Cortana have been leaving Siri in the dust.

But if you're driving or away from home, you can only use Siri. So you don't use Siri to call numbers at least while driving?

Maybe that's the limiting factor of Alexa, that it's tethered to your home and to really get value out of it, you have to invest in smart home devices which Alexa can control.

When driving I use my car's interface.

With Alexa and Cortana growing more portable, things will of course change. Amazon is already experimenting with ads in Alexa -- at least I don't hear of Apple doing that with Siri. Not yet, anyway.

That tells me you have a really low bar for expectations. It tells me that you're willing to put up with utter shiiite. Auto manufacturers will NEVER learn. They are falling into the same vicious cycle they always fall into, and that's believing they can "do it better 'cause". Now that Google has tempted them with the "Android in the car" bait, they're falling for the trap because they simply don't want to do the fucking hard work. They'd sooner let another company do the heavy lifting for them and bestow upon them (FOR FREE) something they believe they can "modify" and "customize" to suit their customers' tastes or whatever. They'd be much better off ceding that interior UI to a company that actually *knows* good, refined UI. Google is not a UI company. Google is an ad company with the singular goal of collecting as much specific user data as possible, so they can shrink-wrap it all up and sell it.

The end result for automakers? The same bag of hurt, disparity, and confusion that exists today. The difference is that now they'll be saddled with pushing security updates to their systems, because Android has become the largest malware target the earth has ever seen. Some of those updates will never come.

/rant

Anyway, I'm the opposite. I NEVER use any of the SYNC features anymore. Music, Radio, Phone, Navigation, and Voice are all through iPhone.

Anyway, I'm the opposite. I NEVER use any of the SYNC features anymore. Music, Radio, Phone, Navigation, and Voice are all through iPhone.

I'm a very big proponent of hands-free/in-dash. Especially after being rear-ended last summer by a guy in a 36' delivery truck doing 80 mph while looking at his phone.

But to be clear -- I use my phone's Google Maps versus the car's (2012 Audi) sat-nav. It's not great, and as I don't utilize/pay for the car's built-in 3G service, its interface is poor. But for making calls and checking voice mail it works just fine.

Telling that the most frequently issued voice commands are to play songs, set alarms or check the weather.

I still believe that the Echo is worth the price of entry even if all you ever do with it is use it as an internet radio player. The connected home stuff is just gravy. On that front, both Google and Siri are completely beaten.

I do not share Ed M's frankly tired and breathless non-concerns about these being "bugs" or "spying devices". Anyone with a copy of Wireshark can prove when it's listening and when it's not. Hell, you can connect to the USB debug port on it and get on the OS if you want, at least in the case of the Echo.

Sure, the tech is in its infancy, but the contextual stuff will get better over time. Moderate your expectations and treat it like a voice command line and not the Star Trek computer.

I do not share Ed M's frankly tired and breathless non-concerns about these being "bugs" or "spying devices". Anyone with a copy of Wireshark can prove when it's listening and when it's not. Hell, you can connect to the USB debug port on it and get on the OS if you want, at least in the case of the Echo.

I think there are more subtleties to the argument than that, but certainly the general "you" can look at the device's state and have some idea of what it's doing. My wife is beyond paranoid of these things and passed up few opportunities to bad-mouth the whole concept of an always-listening device spying on you (never mind the 'dot' requiring a button press, I think?) while being willing to use Siri on her phone -- she didn't like it, but it was "ok" as long as it wasn't always in "hey Siri" mode.

I got the 'YOU BETTER NOT EVER BUY ONE OF THOSE FOR ME' warning many, many times for Alexa specifically and voice assistants in general.

Then Apple came out with their version and all of a sudden it's magically OK because it's Apple. She still wants the ability to turn it into a "must press button to listen" mode -- again, defeating the whole purpose if you don't have a bunch of devices sprinkled here and there, so her solution is wanting to buy at least two. SMDH

All of them will get better with time, but Siri is lagging, and one of the selling points for Apple is being able to recognize each family member's voice and tie into their Apple accounts so everyone can play their iTunes library or interact with their own apps, etc. (I despise this ecosystem lock-in.) I have a dumb stub of an Apple account that doesn't have much about me since 2010 or so. That selling point of leveraging Siri history and all the other metadata Apple collects really only applies to people locked into the ecosystem already -- at least until it builds up metadata. I'm curious to see how that might work in practice, as I'm not going to play along with their reindeer games... but not $350 curious.

I'm still forbidden from even considering another brand, and I'm so done at this point. Decisions about voice assistants are being made by consumers en masse based on emotion and branding-- and not logic or testing.

I agree that Alexa is a better assistant than Siri. Sometimes Siri doesn't understand my request and just gives me a random result it found on the Internet; I'm quite disappointed with its performance. But I must say Google Home is competitive too. Even though it came later, it is going strong in sales and gaining market share from Echo. It's interesting to see the competition between Google and Amazon; in the end, we as consumers will benefit most from the technology. More on this battle can be found here: https://www.1and1.com/digitalguide/online-marketing/web-analytics/google-home-vs-amazon-echo-a-comparative-study/

Bezos: “Get that idea shit out there fast so people recognize us and we’ll worry about any potential shortfalls later. Oh, and by the way, we’ll collect as much as possible and send it over the webs so we can store it on our servers.”

But if you're driving or away from home, you can only use Siri. So you don't use Siri to call numbers at least while driving?

Maybe that's the limiting factor of Alexa, that it's tethered to your home and to really get value out of it, you have to invest in smart home devices which Alexa can control.

Anecdotally, I find that driving is the only place Siri is useful. I have a few Echos at home, and find them much more useful in that setting because:

* They work much better with voice-only interactions (e.g. I'm moving around the house, doing other things, and I don't want to unlock a device, look at a screen, heck, even locate the device).
* "Alexa, play XYZ" (with Spotify set as the default) works for me about 90% of the time. On Siri I honestly gave up, as whenever I tried to move beyond "testing the device" (looking at it, painfully and precisely enunciating every word, etc.) to interacting with it conversationally, it failed. Alexa doesn't (for that particular scenario).
* Like many (I suppose -- based on my, again, anecdotal experience) I use Alexa to control my Hue lights, set timers and alarms, play a song I have a sudden hankering for, and ask what the weather is while I'm getting ready to leave.

For the price of a 30 dollar Dot (plugged into a stereo for music) it's absolutely great. The form factor is a big advantage of the Echos in this use case -- but Apple Home at 350 vs. Dots for 30 are just completely different markets.

While driving, I find one thing about Siri absolutely frustrating -- "Siri, close Waze". I get Apple is trying to force me to conform to "don't close the apps" -- but for the scenario of "in a hands-free manner I want to stop Waze from interacting with location (because I'm done driving/navigating)", there isn't any other solution -- and I think wanting to turn off pinging of location is not a tin-foil or control-freak type scenario; it has a definite impact on battery life. So, much like the magic of Alexa in the house boils down to a few mundane actions, Siri in the car boils down to:

* Call XYZ
* Open Waze
* Open Spotify
* Open Downcast (podcast app)

i.e. a glorified task switcher.

I think in the home Alexa is much more relevant: integration, proper support for non-Apple services, and reasonably priced hardware. I think Google is the "real" competitor there -- and my reason for not going that route was a notion that the hardware would probably work pretty well for the 1-2 years Google was supporting it, then die. This absolutely might not happen, but no one can deny that there is precedent. Amazon, comparatively, seemed/seems to be much more committed, and has a better track record in this regard.

Jeez Ed, hold up a second. Is this the kind of person you really are? Someone that judges another for having a physical deformity or medical condition?

Hey, I find him as creepy as Eric Schmidt. As far as the comment goes, it was in jest. Still, I'd bet with all his money he has grown a thick enough skin (sticks-and-stones and all that) not to worry about what people on the Internet say or think about him.

I'm just shocked that people in general are so goddamn clueless and seemingly unconcerned about SHIT LIKE THIS.

There was a time when I was concerned about Microsoft taking over the world. Now I believe that Amazon, Google, and Facebook have taken things to new and unprecedented levels of scary.

More people should follow Aral because he's correct in saying that these AdTech companies dress things up in a way to make them appear cute, cuddly, and safe when in reality they're cancer waiting to strike.

To the extent that you think calling people out for making fun of disabilities is “PC” then, sure, I’m happy to be the PC police. Do I get a badge?

But let’s get real. Making fun of people with a lazy eye has never been acceptable behavior. Not since the advent of “PC” and not before either. It’s juvenile and mean spirited, and has always been considered juvenile and mean spirited for as long as I can remember and that’s well before the term “politically correct” ever got coined.

You said something that you know you shouldn’t have. The correct response is apologizing, not doubling down.

Jeez Ed, hold up a second. Is this the kind of person you really are? Someone that judges another for having a physical deformity or medical condition?

Hey, I find him as creepy as Eric Schmidt. As far as the comment goes, it was in jest. Still, I'd bet with all his money he has grown a thick enough skin (sticks-and-stones and all that) not to worry about what people on the Internet say or think about him.

I'm just shocked that people in general are so goddamn clueless and seemingly unconcerned about SHIT LIKE THIS.

I don't have a very high opinion of the critical thinking skills of tech pundits in general, but the fawning over Echo is one of the silliest things I've seen in years.

First, it's not better than either Siri or Google Assistant. Yes, Alexa can do a wider range of things, because Amazon permits arbitrary extensibility by third parties. But there's a very good reason Apple and Google haven't enabled something like this. Building a language model for a particular domain (say, messaging or home automation) is hard. Most developers aren't really up to it. Amazon just punts on this problem. When you're creating an Alexa 'skill,' you're basically just asked to provide a bunch of ways people might phrase commands. In many cases, this means using Alexa features requires extremely rigid syntax. It's a verbal command line.

This. Alexa may seem useful now, but it's a technological dead end. Amazon took huge shortcuts in order to release something useful now, and they're left with no base to build on. At least Siri can get better.

Aside from that, their hardware sucks and they put as little work into their software as possible. The Fire TV stuff is sluggish in spite of having hardware that should be able to run their skinned Android smoothly, and they ruined the UI with heavy-handed advertising. They cut too many corners on their tablets trying to meet certain price points (they're the embodiment of Android). The only good devices that Amazon makes/sells are the e-ink Kindles, and that's only because the niche market that buys them wouldn't put up with the bullshit they anchor the rest of their products with.

To be fair, my Kindle Voyage is one of my favorite devices. I'm not biased against Amazon, just disappointed with them.

++ Amazon's approach to voice assistants, while it enjoys a ton of mindshare, is fundamentally brittle. Natural language parsing and an intents system is almost certainly the future. That said, despite its technical drawbacks, there are a ton of advantages to being first. Amazon at this point has a ton of data on how people would like to ask Alexa to do things. Every time someone tries something, it fails, and then they rephrase until it works: that's data. They also have a ton of developers loyal to the product (how loyal isn't clear...).

So IF (and it’s a big if) they can make the transition to natural language they will have a wealth of data and an existing developer base to build on.

I have personally met with some of the people on Amazon's Alexa intents team, so I know they're working on it (and are using it for some Alexa queries), but it's not available via skills yet. There's an API refactoring that's needed to bring that online.

I have personally met with some of the people on Amazon's Alexa intents team, so I know they're working on it (and are using it for some Alexa queries), but it's not available via skills yet. There's an API refactoring that's needed to bring that online.

The transition period could get messy. Natural language could be much better overall, but if it starts failing at the stilted Alexa-speak that people are used to, that will be a poor experience for users. And a natural language + intents system will need more nuanced ways to disambiguate when the same function can be performed by multiple “skills”. Then there’s getting devs on board...

It’s totally the right direction and it’s zero surprise they’re working on it (I think I even had it’s rollout in my 2018 predictions) but the transition isn’t a slam dunk. And it could be an opportunity for competitors. One of the benefits to devs of the Alexa approach is that it’s quite easy to code for. If you’re going through the effort of designing your skill for a natural language intents system, why NOT bring it to Google and HomePod?

Or people are fine with the state of things the way they are, and don't mind using rigid syntax to do some simple tasks for now.

Maybe people aren't interested in having conversations with these things.

Remember, the most widely used commands are to play music, check the weather, and set timers.

People may not want to ask computers things like in Star Trek, beyond the novelty of it.

OK, say it gets so good some day that all Google searches are done through voice. But would you want the results read back to you, or displayed on a screen so that you can browse through them?

This civilization is based heavily on the written word. Gutenberg was significant because we would not be where we are today if the only way information and knowledge was conveyed was through the oral tradition.

Text input and display will never go away, so why try to develop some elaborate voice-centric UI? Other than while in a car or similar situations, why would you want to make voice queries with results read back to you versus the current system?

Or people are fine with the state of things the way they are, and don't mind using rigid syntax to do some simple tasks for now.

Maybe people aren't interested in having conversations with these things.

That seems unlikely unless your feature set is really, really small and the syntax is really, really self-evident. I mean, sure. I don't doubt that there is a group for whom rigid syntax reminders plus calendars plus music is all they'll ever want. I just don't think it's that big. We'll see though. Horatio says they're working on a natural language intents system. If, when they roll it out, they lose customers... we'll know you were right.

Or people are fine with the state of things the way they are, and don't mind using rigid syntax to do some simple tasks for now.

Maybe people aren't interested in having conversations with these things.

That seems unlikely unless your feature set is really, really small and the syntax is really, really self-evident. I mean, sure. I don't doubt that there is a group for whom rigid syntax reminders plus calendars plus music is all they'll ever want. I just don't think it's that big. We'll see though. Horatio says they're working on a natural language intents system. If, when they roll it out, they lose customers... we'll know you were right.

They can roll it out, but let's see if the nature of the interactions changes that much.

You would agree they don't need NLP to perform the currently most popular things?

So once NLP is out, playing music, checking weather and setting timers will no longer be the most popular voice requests?

Or people are fine with the state of things the way they are, and don't mind using rigid syntax to do some simple tasks for now.

Maybe people aren't interested in having conversations with these things.

That seems unlikely unless your feature set is really, really small and the syntax is really, really self-evident. I mean, sure. I don't doubt that there is a group for whom rigid syntax reminders plus calendars plus music is all they'll ever want. I just don't think it's that big. We'll see though. Horatio says they're working on a natural language intents system. If, when they roll it out, they lose customers... we'll know you were right.

They can roll it out, but let's see if the nature of the interactions changes that much.

You would agree they don't need NLP to perform the currently most popular things?

While I agree with the tightly worded statement, I don’t agree at all with the larger sentiment. I think some subset of the population is just fine with using a rigid syntax for a small feature set when that’s all they’re using it for. But it’s not everyone. And it doesn’t imply that rigid syntax will be acceptable when the feature set increases. When the expectations change with the long tail of skills, the mental load of keeping rigid syntax in your head for a large set of functionality will be too much.

Quote:

So once NLP is out, playing music, checking weather and setting timers will no longer be the most popular voice requests?

Over time as NLP improves, the long tail will grow out. The better the NLP and the more “skills” that make use of it, the longer the tail will grow. And that’s what really matters. Even if setting reminders and playing music remain number one until the end of time, there will still be pressure towards ever improving NLP, because rigid syntax becomes exponentially more cumbersome the longer the tail becomes.

I think some subset of the population is just fine with using a rigid syntax for a small feature set when that’s all they’re using it for.

I don't think we're seeing so much of the rigid syntax right now. You see it when using Skills -- e.g. setting a lighting scene in Philips Hue: "Alexa, turn on 'movies' in the basement" works, but "Alexa, set the scene to 'movies' in the basement" doesn't. But with the built-in Alexa responses, you can be much more flexible in how you address her. Are other people seeing rigid syntax for built-in (non-Skill) responses?