Let’s assume that a client has its client secret exposed somewhere. What risks are the client and its users exposed to? Are those the same as if the client had implemented the implicit flow from the beginning?

I would say the risk here is an attacker stealing an authorization code: since the client secret is available, and assuming no other form of client authentication is performed, the attacker would be able to exchange the code for tokens. So it looks similar to the risks of the implicit flow, but slightly more secure, since by default the tokens are not exposed in the user-agent (though the implicit flow could use, for example, response_mode=form_post to avoid that scenario).

I am working on a hobby project which will involve a web server (hosted and owned by me) and a native app (which will communicate with the web server periodically) that an end-user can install via a deb/rpm package. This native app has no traditional UI (besides the command line) and can be installed in browser-less environments. Additionally, I’m trying to avoid registering custom URL schemes. As such, I do not wish to use redirect flows, if possible.

The web server and the native app will both be open source and the code will be visible to everyone, but I suppose it shouldn’t matter in the context of authentication. However, I wanted to point that out in case it matters.

So far, during my research, I’ve come across two mechanisms which seem suitable for what I am trying to achieve:

Resource Owner Password Credentials Grant

Device Authorization Grant

Unfortunately, I’ve come across a lot of articles and blogs stating that Resource Owner Password Credentials Grant should no longer be used. Not sure how much weight I should give these articles, but I’m leaning towards Device Authorization Grant for now.

From my understanding, one of the steps involved in this grant is that the client continuously polls the server to check whether the user has authorized the client. However, instead of polling the server, why not flip the place where the code is entered?

In other words, instead of the client/device displaying a code to the user and the user then entering the code on the server, why not display the code on the server and have the user enter the code into the client? This way the client doesn’t have to needlessly poll the server? Does this not achieve the same thing? I’m really not sure though. I want to ensure I’m not missing something before I implement this.

This is how I envision the general flow for users using my project:

The user would register an account on my site (i.e., the web server). This is just traditional username and password authentication.

The user can then download and install the deb/rpm package which contains my native app. Although, it should be noted that there’s obviously nothing preventing the user from installing the package without registering an account on the server. The whole point of this authentication is to create a link between the account on the server and the native app.

Prior to enabling the daemon/service functionality of the native app, the user will need to authenticate the native app to the server.

To do so, the user can log into the server (using their regular username/password creds) and generate a temporary token.

The user can then use the CLI functionality of the native app to use this temporary token. For example, the user may type my_app_executable authenticate, where my_app_executable is the binary executable and authenticate is the parameter.

This will prompt the user to enter their username and the temporary token.

The app will then send the entered username and temp token to the server, which will validate this combination. If it’s valid, the server will send an access token back to the app.

The app can then use this access token to communicate with the server. Authentication complete.
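The temp-token generation and redemption described above could be sketched on the server side roughly like this (a minimal Python sketch — the function names, token lengths, and 10-minute lifetime are all assumptions, not a spec):

```python
import hmac
import secrets
import time
from typing import Optional

# Hypothetical in-memory store; a real server would persist this.
TEMP_TOKENS = {}  # username -> (temp_token, expiry_timestamp)

def generate_temp_token(username: str, ttl_seconds: int = 600) -> str:
    """Issued when the logged-in user requests a temporary token on the site."""
    token = secrets.token_urlsafe(32)
    TEMP_TOKENS[username] = (token, time.time() + ttl_seconds)
    return token

def redeem_temp_token(username: str, temp_token: str) -> Optional[str]:
    """Called when the native app submits the (username, temp token) pair.

    Returns a long-lived access token on success, None otherwise.
    """
    entry = TEMP_TOKENS.pop(username, None)  # single-use: remove on first attempt
    if entry is None:
        return None
    token, expiry = entry
    if time.time() > expiry or not hmac.compare_digest(token, temp_token):
        return None
    return secrets.token_urlsafe(48)  # the access token the app will store
```

Making the temporary token single-use and short-lived limits the window in which a leaked token is useful, and the constant-time comparison avoids leaking information through timing.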

Based on this, I have a couple of questions:

Does this flow seem secure? Is there an aspect of this that I’m overlooking?

Is it okay to more or less permanently encrypt and persist this access token on the filesystem? If the user turns off the native app for months and then they turn it back on, I would like it to function normally without making the user authenticate again. I suppose I’ll need to implement a way to revoke an access token, and I’m thinking about tracking this in the database on the server side. This would mean that for each HTTP request from the app to the server, the server will need to make a DB check to ensure the access token hasn’t been revoked.
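That per-request revocation check could look something like the following (the table schema, column names, and function name are assumptions for illustration; note that storing a hash of the token, rather than the token itself, limits the damage of a database leak):

```python
import sqlite3

# Assumed schema for illustration only.
SCHEMA = """
CREATE TABLE IF NOT EXISTS access_tokens (
    token_hash TEXT PRIMARY KEY,
    revoked    INTEGER NOT NULL DEFAULT 0
)
"""

def is_token_valid(db: sqlite3.Connection, token_hash: str) -> bool:
    # One indexed lookup per request: reject tokens that are
    # unknown or have been marked revoked.
    row = db.execute(
        "SELECT revoked FROM access_tokens WHERE token_hash = ?",
        (token_hash,),
    ).fetchone()
    return row is not None and row[0] == 0
```

If the per-request lookup ever becomes a bottleneck, a short-lived in-memory cache of recent results can soften the cost, at the price of a small delay before a revocation takes effect.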

I am wondering if a paladin can boost a summoned demon’s saves against hostile effects at all – it’s not exactly “friendly” in most cases – and, if the demon does get the paladin’s +CHA to saves, if it can use that bonus to break free from control.

While this seems straightforward if you take “friendly” to mean the creature’s true intent and beliefs, it is less simple when considering a creature that is being controlled. If someone has been charmed into working as an ally, for example, would they benefit from the paladin’s aura?

Aura of Protection

Starting at 6th level, whenever you or a friendly creature within 10 feet of you must make a saving throw, the creature gains a bonus to the saving throw equal to your Charisma modifier (with a minimum bonus of +1). You must be conscious to grant this bonus. At 18th level, the range of this aura increases to 30 feet.

Summon Greater Demon

[…]At the end of each of the demon’s turns, it makes a Charisma saving throw. The demon has disadvantage on this saving throw if you say its true name. On a failed save, the demon continues to obey you. On a successful save, your control of the demon ends for the rest of the duration, and the demon spends its turns pursuing and attacking the nearest non-demons to the best of its ability.[…]

Demons (MM 53)

A mortal who learns a demon’s true name can use powerful summoning magic to call the demon from the Abyss and exercise some measure of control over it. However, most demons brought to the Material Plane in this manner do everything in their power to wreak havoc or sow discord and strife.

The rules for opportunity attacks say (PHB, p. 195):

You can make an opportunity attack when a hostile creature that you can see moves out of your reach. To make the opportunity attack, you use your reaction to make one melee attack against the provoking creature.

The last benefit of the War Caster feat says (PHB, p. 170):

When a hostile creature’s movement provokes an opportunity attack from you, you can use your reaction to cast a spell at the creature, rather than making an opportunity attack. The spell must have a casting time of 1 action and must target only that creature.

Without the Crossbow Expert feat, all ranged attacks (including ranged spell attacks) made when an enemy is adjacent suffer this penalty (PHB, p. 195):

You have disadvantage on a ranged attack roll if you are within 5 feet of a hostile creature that can see you and that isn’t incapacitated.

As an opportunity attack normally grants a melee attack, does it seem reasonable to assume that the target remains at melee range for the spell attack granted by War Caster? If so, does this require ranged spell attack rolls to be made with disadvantage?

The trigger for an OA is a creature moving “out of your reach”. This suggests to me that the creature is out of the 5′ disadvantage zone, but it seems like that would preclude making a melee spell attack.

Do characters with the War Caster feat get the best of both worlds: being allowed to make either a melee spell attack or a non-disadvantaged ranged spell attack?

I’m working on an API that I’d like to be accessible internally by other servers as well as by devices, both of which I consider confidential private clients. Devices are considered private clients because the client_secret is stored in an encrypted area that prevents unauthorised readout and modification (even though nothing is ever bulletproof).

For auth, I’d like to use OAuth2 with the client_credentials grant, which seems to be a very good fit for these use cases. However, I’m wondering how flexible the standard is regarding how the client_secret is sent.

Basically the RFC doesn’t say much about sending your client_id / client_secret; it just offers an example here: https://tools.ietf.org/html/rfc6749#section-4.4.2, which simply uses the following header: Authorization: Basic base64(client_id:client_secret)
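For reference, the header from the RFC’s example can be built like this (a minimal Python sketch; note that RFC 6749 §2.3.1 also requires the client_id and client_secret to be form-urlencoded before they are joined and Base64-encoded):

```python
import base64
import urllib.parse

def basic_auth_header(client_id: str, client_secret: str) -> str:
    # RFC 6749 §2.3.1: credentials are application/x-www-form-urlencoded
    # before being joined with ':' and Base64-encoded.
    creds = "{}:{}".format(
        urllib.parse.quote(client_id, safe=""),
        urllib.parse.quote(client_secret, safe=""),
    )
    return "Basic " + base64.b64encode(creds.encode("ascii")).decode("ascii")
```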

In my opinion, it could be made slightly more secure by computing a hash:

the client requests a random to the server by sending their client_id

the server replies with a random code (valid for like 10 mins, just like an authorization code)

the client computes a hash = sha256(client_id, client_secret, code) and asks for a token

the server computes the same hash, compares the client hash with the computed hash and sends an access token if they match
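The challenge-response idea above could be sketched like this (a minimal Python sketch; the function names, the in-memory challenge store, and the exact hash input format are all assumptions, not part of any standard):

```python
import hashlib
import hmac
import secrets
import time

# Hypothetical server-side challenge store: client_id -> (code, expiry).
CHALLENGES = {}

def issue_challenge(client_id: str) -> str:
    """Step 1-2: client sends its client_id, server replies with a random code."""
    code = secrets.token_urlsafe(32)
    CHALLENGES[client_id] = (code, time.time() + 600)  # valid ~10 minutes
    return code

def client_proof(client_id: str, client_secret: str, code: str) -> str:
    """Step 3: hash = sha256(client_id, client_secret, code)."""
    material = "{}:{}:{}".format(client_id, client_secret, code)
    return hashlib.sha256(material.encode()).hexdigest()

def verify(client_id: str, known_secret: str, proof: str) -> bool:
    """Step 4: server recomputes the hash and compares in constant time."""
    code, expiry = CHALLENGES.pop(client_id, (None, 0.0))  # single-use challenge
    if code is None or time.time() > expiry:
        return False
    expected = client_proof(client_id, known_secret, code)
    return hmac.compare_digest(expected, proof)
```

Note that this only proves possession of the secret without sending it; it does not authenticate the server or protect the issued access token itself if TLS is broken.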

It would add an extra layer of security in case https is somehow broken or if anyone is able to read the header somehow.

However, it doesn’t seem very OAuth2-compliant, and I don’t really like re-inventing a standard. Another option would be to create my own extension grant; I’m just wondering if it’s really worth it, since no one seems to have done this.

Also, if I want to share my API with a 3rd-party app, I’m not sure it’s a good idea to force them into using something non-standard.

Alter self can turn your body into anything as long as you basically retain the same body shape. It can even change your weight. Moreover, it can transform you into a member of another race.

So, this spell can, in theory, grant the caster flight in two ways:

Turn yourself into an Aarakocra

Reduce your weight to a bird’s, and turn your arm into wings

However, this is not explicitly RAW. The spell has the Aquatic Adaptation option, which explicitly grants the caster a swimming speed equal to their walking speed; it has no equivalent option for flight. Further, fly is a 3rd-level spell, and this use would effectively shortcut it, albeit for one target only.