Conversation

_UPDATE: This proposal was reworked many times throughout this PR. I am keeping this parent message the same for historical purposes, but you can find the closest description of the final design in this comment (with the exception that inspect became describe)._

Intent

The intent of this pull request is to propose a design for the present and future of kubecfg. The conversation from #1167 was the primary input into this design. The idea is to have kubecfg solve two use cases:

Provide a great user experience to view the current status of a Kubernetes cluster and to execute simple commands on the cluster.

Provide a tool which will accept configuration with generic resources to submit changes to and diff against a Kubernetes cluster.

The problem with the current design is that it is too generic to effectively support (1). The proposed redesign enables custom commands per resource (e.g. kubecfg pods, kubecfg services, etc.) but generic subcommands to satisfy (2) (kubecfg submit, kubecfg diff, kubecfg reconcile, etc.).

Proposed set of subcommands

Only a subset of these have been implemented in the attached commit.

kubecfg2 - Parent command.

pods

list [<id>] - List pods in a column-oriented view (see Examples below). By default, the first argument passed in acts as a filter by ID.

get <id> - List info about a pod in a row-oriented view. Designed to show more metadata about a given pod, and over time can be enhanced to make navigating the cluster from a given pod very easy. Currently I'm adding a list of any replication controllers whose selectors match (and therefore who control) the selected pod.

delete <id> - Delete a pod.

create [-f <filename>] - In the future we can accept directories or even data from stdin.

rc - For replication controllers. Happy to use a different abbreviation but replicationControllers seemed too long.

list [<id>]

get <id> - RC's get command could become extremely rich. For now I've added Pods Status which queries for pods under this replication controller and shows counts for running, waiting and terminated, but in the future you could do diffs against the controller's PodTemplate and show counts for matching and non-matching pods. You could also show the first 3-4 pod ID's per category and then show <X more ...>, making debugging even easier.

create [-f <filename>]

update [-f <filename>]

delete <id> - I have removed the delete/stop/rm replication controller confusion in favor of just delete and resize, where delete takes an optional --force flag if replicas > 0.

resize <id> <replicas>

Removed: rollingupdate <controller> <image> <time> - I've included this here for parity but I would like to propose taking it out. This is the one command that I don't think belongs in kubecfg ... it's too long-running, fraught with edge cases, and in reality would probably want to take more into consideration (like fluctuations in metrics, % of pods that come up successfully, etc.). I think this level of functionality belongs in an independent client that uses the API that can look at more components in the infrastructure than just kubernetes.

minions

list [<id>]

get <id>

services

list [<id>]

get <id>

create [-f <filename>]

update [-f <filename>]

delete <id>

Postponed: resolve <id> - Just an idea, but it could give you a random IP/port combination to connect to. May be useful for quickly locating a node to test for a given service.

Postponed: submit [<filename>...] - This command exists to satisfy the second use case above. Reconcile and submit any changes from a given set of config(s), from files or from stdin. If you had an entire directory tree of files that had configs that represented your cluster state, you could use this command to submit them all to either create or update your cluster.

Postponed: diff [<filename>...] - A dry-run version of submit.

run [-p <port spec>] <image> <replicas> <controller> - Same as today.

Examples

$ kubecfg2
kubecfg2 controls the Kubernetes cluster manager.
Find more information at https://github.com/GoogleCloudPlatform/kubernetes.
Usage:
kubecfg2 [command]
Available Commands:
version Print version of client and server
pods List, inspect and modify pods
rc List, inspect and modify replication controllers
help [command] Help about any command
Available Flags:
-a, --auth="/Users/sam/.kubernetes_auth": Path to the auth info file. If missing, prompt the user. Only used if doing https.
--help=false: help for kubecfg2
-h, --host="": Kubernetes apiserver to connect to
-m, --match-version=false: Require server version to match client version
Use "kubecfg2 help [command]" for more information about that command.

$ kubecfg2 help pods list
List one or more pods. Pass in an ID to filter.
Usage:
kubecfg2 pods list [<id>] [flags]
Available Flags:
-a, --auth="/Users/sam/.kubernetes_auth": Path to the auth info file. If missing, prompt the user. Only used if doing https.
--help=false: help for list
-h, --host="": Kubernetes apiserver to connect to
-m, --match-version=false: Require server version to match client version
-l, --selector="": Selector (label query) to filter on
Use "kubecfg2 help [command]" for more information about that command.

Other notes

I am currently using spf13/cobra but am also considering codegangsta/cli. The latter may allow us to write out the command definitions in a cleaner way and support bash autocompletion, but the former allows persistent flags. Open to using either one.
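Whichever library wins out, the proposed layout is just a two-level resource/verb command tree. A minimal stdlib-only sketch of that dispatch shape (command names come from the proposal above; the `command` type, `dispatch`, and `newRoot` are purely illustrative, not kubecfg code):

```go
package main

import (
	"fmt"
	"strings"
)

// command is a hypothetical node in the kubecfg2 command tree: a resource
// (pods, rc, ...) holds verb subcommands (list, get, ...), and leaves run.
type command struct {
	short string
	run   func(args []string) string
	subs  map[string]*command
}

// dispatch walks the tree as far as the arguments match, then runs the
// deepest matched command with the remaining arguments.
func dispatch(root *command, args []string) (string, error) {
	cmd := root
	for len(args) > 0 {
		sub, ok := cmd.subs[args[0]]
		if !ok {
			break
		}
		cmd, args = sub, args[1:]
	}
	if cmd.run == nil {
		return "", fmt.Errorf("unknown command: %s", strings.Join(args, " "))
	}
	return cmd.run(args), nil
}

func newRoot() *command {
	return &command{
		short: "kubecfg2 controls the Kubernetes cluster manager.",
		subs: map[string]*command{
			"pods": {
				short: "List, inspect and modify pods",
				subs: map[string]*command{
					"list": {run: func(args []string) string {
						// First argument, if present, filters by ID.
						if len(args) > 0 {
							return "pods filtered by id " + args[0]
						}
						return "all pods"
					}},
				},
			},
		},
	}
}

func main() {
	out, _ := dispatch(newRoot(), []string{"pods", "list", "redis"})
	fmt.Println(out)
}
```

Both cobra and codegangsta/cli build essentially this structure; the choice mostly affects how flags and help text attach to each node.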

Some of these commands (especially get commands that show a lot of info) may result in lots of queries to the server. There are several ways to handle this: (1) don't consider it an issue, (2) offer a "succinct mode" that tries to do as little as possible and (3) over time backport functionality that is particularly useful directly into apiserver as new API endpoints. Especially with (3), you could consider kubecfg a proving ground for new convenience functionality to see whether it's worth implementing in apiserver directly.

File layout

cmd/kubecfg2/kubecfg2.go - All cobra and CLI-related code remains in this file/package.

pkg/kubecfg2/<subcommand>.go - All logic per subcommand lives in one file per subcommand.

pkg/kubecfg2/kubecfg2.go - Common functions used by cmd/kubecfg2 and pkg/kubecfg2 packages.

Open Questions

Can we leave rollingupdate out of this version for the merge? If it's needed for demo purposes, can it be reimplemented somewhere outside of this tool?

@erictune pointed out that get is not the best name for the above. info and details are other options, but ideas here are welcome.

This comment has been minimized.

For chatty commands to the API -- we should use this to shape the API. So -- option (3). Common things should be doable without tons of API calls.

We should think soon about machine readable output from various commands. It is super common for folks to script this.

Some documentation on the file formats would be useful. I assume that these are just serializations of the API payloads, but we should figure out how that maps -- especially around versions and update.

I like submit -- it basically says -- take this file/directory, figure out what it is and make it happen. This will grow into a rudimentary config system. This can be a can of worms and we will have to think a little here before it grows too big too fast too unstructured. Specifically:

Why do I have to create lots of files? Why not just a file that has subsections for various resources.

If I have an omnibus file, can I just select certain things out of it?

This comment has been minimized.

We should think soon about machine readable output from various commands. It is super common for folks to script this.

I am aiming towards having all the output of the resource commands be parseable in the sed/awk style, e.g. with consistent and parseable whitespace, but in general I believe the kubecfg resource commands (pods, rc, etc.) should primarily optimize for human readability and convenience. The REST API is very usable and scriptable with curl but not very friendly to people, so kubecfg should fill the niche of a person trying to interact with a k8s cluster. However, commands like submit and services resolve can focus more on easily parseable output, since those are likely to be scripted.
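For the sed/awk-style columns described here, Go's standard text/tabwriter already produces output that is aligned for humans yet splittable on whitespace runs. A small sketch (the pod fields are invented for illustration):

```go
package main

import (
	"bytes"
	"fmt"
	"text/tabwriter"
)

// printPods renders pods as aligned columns. tabwriter pads cells with
// spaces, so `awk '{print $1}'` and friends still split fields cleanly as
// long as individual cell values contain no spaces.
func printPods(pods [][3]string) string {
	var buf bytes.Buffer
	w := tabwriter.NewWriter(&buf, 0, 8, 2, ' ', 0)
	fmt.Fprintln(w, "ID\tIMAGE\tSTATUS")
	for _, p := range pods {
		fmt.Fprintf(w, "%s\t%s\t%s\n", p[0], p[1], p[2])
	}
	w.Flush()
	return buf.String()
}

func main() {
	fmt.Print(printPods([][3]string{
		{"redis-master", "dockerfile/redis", "Running"},
		{"frontend-1", "example/frontend", "Waiting"},
	}))
}
```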

Some documentation on the file formats would be useful. I assume that these are just serializations of the API payloads, but we should figure out how that maps -- especially around versions and update.

I think today kubecfg can parse and export both JSON and YAML, so I'd be fine to keep that. Did you mean something else?

I like submit -- it basically says -- take this file/directory, figure out what it is and make it happen. This will grow into a rudimentary config system. This can be a can of worms and we will have to think a little here before it grows too big too fast too unstructured. Specifically:

Why do I have to create lots of files? Why not just a file that has subsections for various resources.

If I have an omnibus file, can I just select certain things out of it?

What about parameterization of that file.

I think we can make it really flexible - you can use one file, many files, recursive directories, do selective updates, etc. Mostly will come down to which style we want to prioritize implementing first.

This comment has been minimized.

Wondering if you had thoughts on how submit might interact with version control.

Do I commit my desired state, after peer review, to git and then use kubecfg2 to submit that to k8s?

And, if there is parameterization of config, as Joe suggested, does parameter substitution happen before commit or before pushing to k8s?

Not stuff that this PR needs to address, but would welcome your thoughts.

I think we could support many different workflows. Some people may prefer to have a git repo with all their config and have a git hook on receive which runs kubecfg automatically to synchronize the config state with the cluster. Other people may prefer to run all the synchronizing commands manually but keep a record of all changes in a repo somewhere. I think that if we provide the primitive of "take this file and sync it up with the cluster," people can create their own ways of using it that fit them best. For parameter substitution, we can see what needs there are for templating and see if that's best done in kubecfg or directly in the REST API itself.

This comment has been minimized.

wrt machine readable output: while it is possible to hit the k8s API directly from curl, I'd still recommend we (probably via an option) provide easily parseable output:

Auth gets tricky from curl. If this (or another?) utility does nothing more than be curl+auth, that would be super useful.

Many (most?) users will learn and know the system through tools like this and won't want to learn anything about the API. They'll want to take a set of manual actions and automate them. We shouldn't force them to climb a complexity cliff to start doing automation.

We can add it later, of course, but I'd love to keep the awk/sed/grep/cut to a minimum.

This comment has been minimized.

I presume you are referring to the List/Get/Create/Delete subcommands, but each resource (Pod/RC/Minion/Service) may want to customize some subset of the subcommand fields (Use/Short/Long/Run/Flags), and if you parameterize all of them I think you end up just as well off writing out the full subcommands.

This comment has been minimized.

I had the same reaction when I wrote it initially, and tried a bunch of different things to decouple the output from the processing. The problem is that there's nearly no logic in these methods - they just take data from client.List/Get/etc. and print it out. All the complexity lives either in the client library or is implemented in testable functions like getReplicationControllersForLabels or dataToObject.

This comment has been minimized.

The original pkg/kubecfg does not have any of the methods except the special replication controller methods Update, Stop, Resize and Run. For everything else (Pods/RCs/Minions/Services + List/Create/Update/Delete), the fetching logic is generic in cmd/kubecfg/kubecfg.go and the printing logic is generic in pkg/kubecfg/resource_printer.go. This generic decoupling is precisely the reason for the redesign - it doesn't enable any ability to print out custom information per resource/method. By blowing it out into a custom ResourceMethod function per combination, we now have the flexibility to fetch and print out whatever we want for each command the user submits.

You are right that you could still break out the printing logic into pkg/kubecfg2/pods.go and pkg/kubecfg2/pods_printer.go, but the logic in pkg/kubecfg2/pods.go would be so thin that it would just be one-line wrappers around the API.

Another way to think about it is that pkg/kubecfg2/<subcommand>.go are actually the printing methods, and if there is any complex API or querying logic it will get factored out (hopefully into the client or apiserver).

This comment has been minimized.

In that case, why not do the query by just calling the client method, and pass the resulting object into these printing routines? Why do these printing routines need to be passed a client? That doesn't seem right to me.

If I put my devil's advocate hat on even further, why not just replace the printing logic in pkg/kubecfg?

This comment has been minimized.

The thing is that we may want to fetch a number of different things to complete all of our printing. I think I see what you're getting at though - if you look at this through the lens of MVC, with C as kubecfg2, M as client.Interface and V as pkg/kubecfg2, then you would want kubecfg2 to do the M queries and send them into the pkg/kubecfg2 View code instead of pkg/kubecfg2 doing client.Interface queries directly.

We could do that, but it means that the code would be more verbose/complex, as we'd need to be sure we query everything ahead of time and then marshal it into the templates. I do like the idea, though, and it does yield a lot more flexibility with the output types. I will give it a shot.

(Re: devil's advocate, the reason we need to replace more than just the printing logic is because we want the flexibility to query for a variety of data (e.g. not just the pod that we're GET'ing) before sending it to the printer, so we need that custom implementation per resource/method.)

This comment has been minimized.

the reason we need to replace more than just the printing logic is because we want the flexibility to query for a variety of data (e.g. not just the pod that we're GET'ing) before sending it to the printer, so we need that custom implementation per resource/method

This comment has been minimized.

I'm not super excited about 'rc' as the replication controller abbreviation. It's hard to parse for a first-time user. Perhaps we could support both "short" ('rc', 'pd', ...) and long ('replicationController', 'pod', ...) names?

This comment has been minimized.

The intent of this pull request is to propose a design for the present and
future of kubecfg. The conversation from #1167 was the
primary input into this design. The idea is to have kubecfg solve two use
cases:

Provide a great user experience to view the current status of a
Kubernetes cluster and to execute simple commands on the cluster.

Provide a tool which will accept configuration with generic
resources to submit changes to and diff against a Kubernetes cluster.

The problem with the current design is that it is too generic to
effectively support (1). The proposed redesign enables custom commands per
resource (e.g. kubecfg pods, kubecfg services, etc.) but generic
subcommands to satisfy (2) (kubecfg submit, kubecfg diff, kubecfg
reconcile, etc.).
Proposed set of subcommands

Note that only a few have been implemented so far in the attached commit.

kubecfg2 - Parent command.

pods

Would be nice if "pod" was a magic alias to "pods" so that both work.

list [<id>] - List pods in a column-oriented view (see Examples
below). By default, the first argument passed in acts as a filter by ID.

Regarding output: Something that has served me well in the past with CLIs
was to make a set of top-level flags (or a single flag) that sets the
output mode for commands. Internally, the app collects data to be printed
in a slightly more abstract structure, and when it comes time to print,
apply the global formatting.

Internally the data can all be stored as key-value, but the varying output
lets people use the data in different ways. An idea.
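The "collect abstractly, format globally" idea above could be sketched like this: commands emit key/value records, and a single top-level format switch decides the rendering. All names here are hypothetical, and only text and json modes are shown:

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// record is a hypothetical intermediate form: ordered keys plus values,
// so text output stays stable while json output stays structured.
type record struct {
	Keys   []string
	Values map[string]string
}

// render applies the global output mode chosen by a --format-style flag.
func render(recs []record, format string) (string, error) {
	switch format {
	case "text":
		var b strings.Builder
		for _, r := range recs {
			for _, k := range r.Keys {
				fmt.Fprintf(&b, "%s=%s ", k, r.Values[k])
			}
			b.WriteString("\n")
		}
		return b.String(), nil
	case "json":
		out, err := json.Marshal(recs)
		return string(out), err
	default:
		return "", fmt.Errorf("unknown format %q", format)
	}
}

func main() {
	recs := []record{{
		Keys:   []string{"ID", "Status"},
		Values: map[string]string{"ID": "redis-master", "Status": "Running"},
	}}
	text, _ := render(recs, "text")
	fmt.Print(text)
}
```

The point of the intermediate structure is that adding a yaml or quiet mode later touches only `render`, not every command.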

get <id> - List info about a pod in a row-oriented view.
Designed to show more metadata about a given pod, and over time can be
enhanced to make navigating the cluster from a given pod very easy.
Currently I'm adding a list of any replication controllers whose selectors
match (and therefore who control) the selected pod.

I think we should strive for mostly well-defined, limited commands that
can be composed. Be wary of jamming everything into a single command.

delete <id> - Delete a pod.

create [-f <filename>] - In theory this can come from stdin as well.

Please handle file, stdin, and if possible an inline json string (as the
next arg).

What about a generic "select" operation that evaluates a selector? Like:

rc - For replication controllers. Happy to use a different
abbreviation but replicationControllers seemed too long.

If it is too long, it is too long everywhere - let's not abbreviate in some
places.

list [<id>]

get <id> - RC's get command could become extremely rich. For
now I've added Pods Status which queries for pods under this replication
controller and shows counts for running, waiting and terminated, but in the
future you could do diffs against the controller's PodTemplate and show
counts for matching and non-matching pods. You could also show the first
3-4 pod ID's per category and then show <X more ...>, making debugging even easier.

delete <id> - Straight REST delete without resizing first, as
per current behavior.

create [-f <filename>]

update [-f <filename>]

set - A slightly more generic version of
resize.

stop <id> - First sets the replicationController's replicas to 0, then
deletes it, like the current kubecfg behavior.

rollingupdate <controller> <image> <time> - I've included this
for parity but this is the one command that I don't think belongs in
kubecfg ... it's too long-running, fraught with edge cases, and in reality
would probably want to take more into consideration (like fluctuations in
metrics, % of pods that come up successfully, etc.). I think this level of
functionality belongs in an independent client that uses the API and can
look at more components in the infrastructure than just kubernetes.

I agree - I would be OK to lose this into a higher-level tool, or even just
a shell script that wraps kubecfg

minions

list [<id>]

get <id>

services

list [<id>]

get <id>

resolve <id> - Just an idea, but it could give you a random
IP/port combination to connect to. May be useful for quickly locating a
node to test for a given service.

I would call resolve 'endpoints' and just return the endpoints of a
service - let the user pass it to "| head -1" or something.

What about creating and updating a service?

submit [<filename>...] - This command exists to satisfy the
second use case above. Reconcile and submit any changes from a given set of
config(s), from files or from stdin. If you had an entire directory tree of
files that had configs that represented your cluster state, you could use
this command to submit them all to either create or update your cluster.

Available Commands:
version Print version of client and server
pods List, inspect and modify pods
rc List, inspect and modify replication controllers
help [command] Help about any command

Available Flags:
-a, --auth="/Users/sam/.kubernetes_auth": Path to the auth info file. If missing, prompt the user. Only used if doing https.
--help=false: help for kubecfg2
-h, --host="": Kubernetes apiserver to connect to
-m, --match-version=false: Require server version to match client version

Use "kubecfg2 help [command]" for more information about that command.

Will you cache things like KUBERNETES_MASTER, for example, so I don't have
to set them?

$ kubecfg2 help pods list
List one or more pods. Pass in an ID to filter.

Usage:
kubecfg2 pods list [<id>] [flags]

Available Flags:
-a, --auth="/Users/sam/.kubernetes_auth": Path to the auth info file. If missing, prompt the user. Only used if doing https.
--help=false: help for list
-h, --host="": Kubernetes apiserver to connect to
-m, --match-version=false: Require server version to match client version
-l, --selector="": Selector (label query) to filter on

How does flag parsing work? Can I say

kubecfg -a="/tmp/foo" pods list -l "foo in (bar)", or do flags have to come
at the end? I think that doing what the user intuits around flags is
something that can make or break a CLI experience.

Use "kubecfg2 help [command]" for more information about that command.

Some of these commands (especially get commands that show a lot of
info) may result in lots of queries to the server. There are several ways
to handle this: (1) don't consider it an issue, (2) offer a "succinct mode"
that tries to do as little as possible and (3) over time backport
functionality that is particularly useful directly into apiserver as new
API endpoints. Especially with (3), you could consider kubecfg a proving
ground for new convenience functionality to see whether it's worth
implementing in apiserver directly.

File layout

cmd/kubecfg2/kubecfg2.go - All cobra and CLI-related code remains in
this file/package.

pkg/kubecfg2/<subcommand>.go - All logic per subcommand lives in one file per
subcommand.

pkg/kubecfg2/kubecfg2.go - Common functions used by cmd/kubecfg2 and
pkg/kubecfg2 packages.

This comment has been minimized.

I've been going back and forth on whether it will help or hurt to pull them out. The stuff I'm doing inside the function feels like it's inherent to the subcommand definition itself, which is why I've left them in, but if it gets even remotely more complex, yes, it should be pulled out.

This comment has been minimized.

General comment, all the nitpicking is fun, but I think you're best served
by fleshing out the prototype and general model, adding commands, figuring
out patterns for input and output, etc. and then we can come back and
nitpick it to our hearts' content. :)

This comment has been minimized.

Yup. I just finished all the commands for pods|replicationControllers|minions|services. The only major comment so far is around @lavalamp's dislike of the conflation of view and model, along with the related desire to keep machine-parseable output as an option, so I'm working on modeling out both of those now.

This comment has been minimized.

Yeah, I do appreciate this proposal even if I'm complaining a little bit. :)

It may be worth keeping in mind that we're going to build some sort of config thingy (@bgrant0607 has a proposal issue), which my mental model has as a layer on top of (or in parallel with) kubecfg. People using kubecfg are kicking the tires or debugging their cluster; it doesn't currently look like it will end up as the recommended way to run your applications... Unless you make it super awesome before the config thingy lands, I suppose... :)

This comment has been minimized.

Is the config thingy not just some implementation of the submit subcommand described above? Then this kubecfg becomes a pretty good tool to handle the two use cases I listed out: managing your cluster and syncing config to it. @bgrant0607, I would be very curious to hear your thoughts.

This comment has been minimized.

You might consider putting off submit to a future PR, unless you expect it to significantly alter your design here. It's a tar baby. Also, updates are the killer app for config, IMO. If kubecfg2 doesn't do rollingupdate (for which I opened #1353), there may not be much point to supporting config.

I'd also like the config mechanism to be able to handle api plugins (#991) generically, which is sort of contrary to what you're trying to do here.

Maybe submit should just live elsewhere.

If submit and rollingupdate live elsewhere, you might consider only supporting opinionated, special-case updates, such as resize and add/remove labels.

I'd reinforce what others said about designing this to be scriptable. I kind of expect a good CLI to be imperative and scriptable. With scriptability, 2 alternative approaches would be piping commands together or implementing a full-blown scripting system. (I've seen decent "shells" in Python, LISP, and custom stack languages.) Assuming the former, machine-readable output, concise mode (-q), and just outputting object URLs would be useful. Perhaps it should be called kubecli or kubecmd rather than kubecfg. Have you thought about how one might want to string commands together, such as to get info about all minions on which the pods corresponding to a particular service are running?

That said, some comments re. submit:

I'd rename "submit" to "reconcile" if that's what you want it to do, or perhaps just "up".

I could imagine other operations also being driven by configuration data, such as update, delete/down, or info.

It should be possible to select which objects are affected by kind and/or label selector.

The initial config sketch was in #1007. I'll try to get back to breaking it into smaller pieces soon.

This comment has been minimized.

I like the control of output that go templates offer - the only problem today is that they depend on the internal API instead of an API version you can specify. To me, that's better than porcelain or grep any day.

What we lack is filtering, which given a same label and field server implementation we could also provide on the client - the two should be roughly analogous. That seems a far more powerful solution (and fits with our API model) than some of the other proposed approaches.

This was referenced Sep 18, 2014

This comment has been minimized.

One additional argument for generating the rest interaction code programmatically: I want to add support for k8s plugins. Therefore, kubecfg needs to be able to deal with resource types that weren't compiled into it.

Basically, to increase readability and usability, I’m doing a number of things to the original parameters:

I’m shuffling them around (e.g. moving Name to be a first-level key).

I’m converting groups of maps or structs into strings (e.g. Labels or RestartPolicy).

I’m creating new keys by pulling in data from other REST calls (to get “Replication Controller” I query for any controllers that have selectors that match my labels).

But all told, what I’m really doing is creating a new Pod data model, which we can call KubecfgPod for convenience, that has all this information in a more user-friendly format.

Now, if all I’m doing is printing this out, this is fine. I could print out the above in the key-value textual output I have above, or I can print it in YAML or JSON, no big deal.

But if I want kubecfg to become more advanced with querying and filtering (as has been mentioned by @thockin, @bgrant0607 and @smarterclayton) or even more tightly integrated into config, this becomes problematic. What should people refer to, KubecfgPod fields or Pod fields? If they refer to KubecfgPod fields, we would have to publish it and evolve it side by side with Pod fields, and force people to potentially learn two conflicting models, making config management confusing. If we go with Pod fields, then what they see printed out and how they are querying are at odds.

The question is, how do we reconcile ease of use with the kubecfg tool compared to what the API is actually outputting and how config is structured?
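To make the trade-off concrete, here is a rough sketch of the transformation being described: a hypothetical KubecfgPod view model built from a simplified (not the real API) Pod, plus one value obtained from a separate controller query:

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// Pod is a deliberately simplified stand-in for the API object.
type Pod struct {
	ID     string
	Labels map[string]string
}

// KubecfgPod is the hypothetical display-oriented model: Name promoted to
// a first-level key, Labels flattened to a string, and a Controller field
// that the API object does not carry (it comes from a separate RC query).
type KubecfgPod struct {
	Name       string
	Labels     string // e.g. "env=prod,name=redis"
	Controller string
}

// flattenLabels converts a label map into a stable, human-friendly string.
func flattenLabels(labels map[string]string) string {
	pairs := make([]string, 0, len(labels))
	for k, v := range labels {
		pairs = append(pairs, k+"="+v)
	}
	sort.Strings(pairs) // deterministic output for display and scripts
	return strings.Join(pairs, ",")
}

func toKubecfgPod(p Pod, controller string) KubecfgPod {
	return KubecfgPod{Name: p.ID, Labels: flattenLabels(p.Labels), Controller: controller}
}

func main() {
	p := Pod{ID: "redis-master", Labels: map[string]string{"name": "redis", "env": "prod"}}
	fmt.Printf("%+v\n", toKubecfgPod(p, "redis-controller"))
}
```

The question in the comment is exactly whether fields like `Labels` and `Controller` in this derived shape should ever be something users query against, or whether querying must always target the original Pod fields.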

This comment has been minimized.

@ghodss This is a very nice proposal and thread. In OpenShift we are in the process of writing the very first skeleton proposals for our end-user CLI[1] and we would like to stay in conformance with the design and technology choices of kubecfg.

We are also considering codegangsta/cli for cleaner command definitions, specifically the clear separation of arguments and flags, default values and so on. It does appear to support global flags through func (c *Context) GlobalBool(name string) bool and similar, although I didn't test it. spf13/cobra on the other hand has a very nice sample project, like a reference implementation, with spf13/hugo.

One feature I didn't find in any of them is flags that accept a list of predefined values, for example something like: [--format=raw|json|yaml].
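For what it's worth, Go's standard flag.Value interface is enough to build such a predefined-values flag by hand, and either library can wrap a custom Value. A sketch using the --format=raw|json|yaml example above (the enumFlag type is invented here, not part of either library):

```go
package main

import (
	"flag"
	"fmt"
)

// enumFlag is a hypothetical flag.Value that only accepts values from a
// fixed allowed list, e.g. --format=raw|json|yaml.
type enumFlag struct {
	allowed []string
	value   string
}

func (e *enumFlag) String() string { return e.value }

// Set rejects anything outside the allowed list, which is the behavior
// missing from both cobra and codegangsta/cli out of the box.
func (e *enumFlag) Set(v string) error {
	for _, a := range e.allowed {
		if v == a {
			e.value = v
			return nil
		}
	}
	return fmt.Errorf("must be one of %v, got %q", e.allowed, v)
}

func main() {
	format := &enumFlag{allowed: []string{"raw", "json", "yaml"}, value: "raw"}
	fs := flag.NewFlagSet("kubecfg2", flag.ContinueOnError)
	fs.Var(format, "format", "output format: raw|json|yaml")
	if err := fs.Parse([]string{"--format=yaml"}); err != nil {
		fmt.Println("parse error:", err)
		return
	}
	fmt.Println("format:", format.value)
}
```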

This comment has been minimized.

Quick note - in the new version of cobra, -h is reserved for help (which I think is the right behavior). Unfortunately this conflicts with our -h host param, so I changed it to --server|-s. Happy to take other naming suggestions as long as they aren't -h.

This comment has been minimized.

I thought about --master but I don't know if that's the right word for an end user command. --server seems more appropriate to me.

This comment has been minimized.

Yes. If we follow through with @smarterclayton's idea to make the creation of these commands more standardized and composable, we will pull out things like these long-help strings and embedded functions and make a cleaner way to define commands. I'll make an issue to track that refactoring.

This comment has been minimized.

create and update both take json as "-f <filename>" or a descriptor string or stdin - obviously they need the JSON body

delete takes json as "-f <filename>" or a descriptor string or stdin or this new "-i <id>,..." syntax (which is problematic because of id/name).

get takes -f, but it doesn't mean file input (!!), or "<kind> [<id>]"

describe takes "<kind> <id>"

Can we unify these more? Does delete really need to take json at all? I suppose that may be how we (eventually) do resource version or other preconditions? Do we need it now? I feel like that could be left on the table for now. If we change delete, maybe delete/get/describe can all use common syntax like:

This comment has been minimized.

For name vs id, I don't like having to always prefix with "name:" or "id:"... it feels clumsy for such a common operation. Can we use the @id syntax that @smarterclayton recommended? So:

[get | delete | describe] <kind> [@][<id-or-name>]

get pods
get pod nifty
get pod @1234-56-7890-234234-456456
get service @2345-56-7890-234234-456456
get nifty // automatically errors because can't use name without kind
get @1234-56-7890-234234-456456 // maybe we can support it in the future

We can use kubectl as a testing ground for the syntax and, if it works elegantly, we can broaden it into a more general way to reference IDs. (In the future the <type>:<value> style should be supported as well; we could even support name:, id:, label:, and the like. But we should ideally avoid creating that syntax quite yet.)
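A sketch of how the @-prefix sugar proposed above might be parsed (the parseRef function name is hypothetical, purely for illustration):

```go
// Sketch of the proposed "@id" sugar: a "@"-prefixed argument selects by ID,
// anything else selects by name. Not actual client code.
package main

import (
	"fmt"
	"strings"
)

// parseRef splits an <id-or-name> argument per the proposed syntax:
// "@1234-..." selects by ID, "nifty" selects by name.
func parseRef(arg string) (byID bool, value string) {
	if strings.HasPrefix(arg, "@") {
		return true, strings.TrimPrefix(arg, "@")
	}
	return false, arg
}

func main() {
	for _, arg := range []string{"nifty", "@1234-56-7890-234234-456456"} {
		byID, value := parseRef(arg)
		fmt.Printf("arg=%s byID=%v value=%s\n", arg, byID, value)
	}
}
```

Since the @ is stripped before the value is used, it stays pure CLI sugar and never reaches etcd.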

On the topic of -f conflicting, I actually think it makes more sense for "modify" commands like create, update, and delete, and "read" commands like get and describe, to each behave like the other commands in their group. And I think it makes sense to be able to provide the same json file you just used to create a (named) pod in order to delete it as well. I'd rather make delete look more like its group than mix it with get and describe.

The additional -i flag is the only thing that makes delete look different, and the reason for it is convenience: delete is the only "modify" command where it's plausible you rm'd the json file you were using for creates and updates before sending the command to kubecfg. With the above change we could keep the -i flag format, or we could autodetect whether the first argument is JSON or not - if not, we fall back to get/describe semantics.
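The autodetection idea could be as simple as checking whether the argument parses as a JSON object (a sketch; looksLikeJSON is a hypothetical helper, not an existing function):

```go
// Sketch: if the first argument parses as a JSON object, treat it as an
// inline spec; otherwise fall back to get/describe-style <kind> <id> args.
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// looksLikeJSON reports whether arg is a JSON object, so delete could accept
// either an inline spec or a plain identifier without an extra flag.
func looksLikeJSON(arg string) bool {
	trimmed := strings.TrimSpace(arg)
	if !strings.HasPrefix(trimmed, "{") {
		return false
	}
	var obj map[string]interface{}
	return json.Unmarshal([]byte(trimmed), &obj) == nil
}

func main() {
	fmt.Println(looksLikeJSON(`{"kind": "Pod", "id": "nifty"}`)) // true
	fmt.Println(looksLikeJSON("nifty"))                          // false
}
```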

This comment has been minimized.

I'm OK with (name | @id) - I must have missed the proposal. Just to be clear, this is purely CLI sugar, we're not going to store the @ in etcd, right?

Regarding delete being more like create/update than get/describe - I see that they are all mutating ops, but I still find it weird. I can see an argument that a script might say "create X, do Y, if fail delete X", which is just plausible enough that I will not argue it further. But now you have need for the -i syntax which is unique to delete, which is sort of a smell.

This fixes an oversight: stdin as "-" is usually an argument to a file flag (e.g. tar -cvf -) and eliminates in-line JSON, which means we can have room for other non-flag args without auto-detecting. If we really want inline JSON, flag it, but do so everywhere.

[-f file | -s spec] # or something

You get your delete from a file and I get my normal syntax. echo "{json...}" | kubectl or kubectl <("{json...}") are not much worse than kubectl "{json...}"

case "Pod":
I tried this, but it ended up being very duplicative, e.g.:

resourceMap := map[string]map[string]string{
	"pods": map[string]string{
		"path": "pods",
		"kind": "Pod",
	},
	"pod": map[string]string{
		"path": "pods",
		"kind": "Pod",
	},
	"po": map[string]string{
		"path": "pods",
		"kind": "Pod",
	},
}
So I switched back. My guess is once this bakes a bit, some form of it will be moved upstream into the client, and we can probably take advantage of some other mappings or data structure in the client package.
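For reference, one less duplicative shape would be a single canonical table plus an alias map (purely a sketch of what an upstream client mapping could look like; none of these names are the actual client package's):

```go
// Sketch: one canonical table of resource info plus an alias map, instead of
// repeating path/kind for every accepted spelling. Illustrative names only.
package main

import "fmt"

type resourceInfo struct {
	path string // REST path segment, e.g. "pods"
	kind string // API kind, e.g. "Pod"
}

var resources = map[string]resourceInfo{
	"pods":                   {"pods", "Pod"},
	"replicationControllers": {"replicationControllers", "ReplicationController"},
}

// aliases map every accepted spelling to its canonical resource name.
var aliases = map[string]string{
	"pods": "pods", "pod": "pods", "po": "pods",
	"replicationControllers": "replicationControllers", "rc": "replicationControllers",
}

// lookup resolves any accepted spelling to its resourceInfo.
func lookup(name string) (resourceInfo, bool) {
	canonical, ok := aliases[name]
	if !ok {
		return resourceInfo{}, false
	}
	info, ok := resources[canonical]
	return info, ok
}

func main() {
	info, ok := lookup("po")
	fmt.Println(ok, info.path, info.kind) // true pods Pod
}
```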
