Open Issues

Please fill out the issue checklist below and provide ALL the requested information.

[ ] I tried updating to the latest version of the CF CLI to see if it fixed my problem.

[x] I attempted to run the command with CF_TRACE=1 to help debug the issue.

[x] I am reporting a bug that others will be able to reproduce.

Describe the bug and the command you saw an issue with
This is a re-submission of the issue reported in #1366.

We see the same issue regularly in our CI pipeline when testing our EFS volume service broker.

It appears that cf services requests a list of service instances from the API and then makes a set of follow-up API calls to fetch more details for those instances, without handling the case where a service instance was deleted between the two calls.

What happened
cf services returns an error:

The service instance could not be found: a2560bc8-d7d9-4900-8955-3f43593dbcdf

Expected behavior
cf services should omit the service instance and return OK if the instance was deleted while the cf services command was running.
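A fix could treat a not-found response from the follow-up detail call as "deleted mid-listing" and skip that instance rather than failing the whole listing. The sketch below is illustrative only; the names listWithDetails, errNotFound, and the detailsFunc callback are invented stand-ins, not the CLI's actual client code:

```go
package main

import (
	"errors"
	"fmt"
)

// errNotFound stands in for the API's 404 / ServiceInstanceNotFound error.
var errNotFound = errors.New("service instance not found")

// detailsFunc stands in for the per-instance follow-up API call.
type detailsFunc func(guid string) (string, error)

// listWithDetails lists instances, skipping any that disappear between
// the initial listing and the detail call (hypothetical sketch).
func listWithDetails(guids []string, fetch detailsFunc) ([]string, error) {
	var out []string
	for _, guid := range guids {
		d, err := fetch(guid)
		if errors.Is(err, errNotFound) {
			continue // deleted mid-flight: omit instead of erroring
		}
		if err != nil {
			return nil, err
		}
		out = append(out, d)
	}
	return out, nil
}

func main() {
	fetch := func(guid string) (string, error) {
		if guid == "gone" {
			return "", errNotFound
		}
		return "details:" + guid, nil
	}
	rows, _ := listWithDetails([]string{"a", "gone", "b"}, fetch)
	fmt.Println(rows) // [details:a details:b]
}
```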

To Reproduce
Steps to reproduce the behavior; include the exact CLI commands and verbose output:
1. Register an asynchronous service broker whose create and delete operations take some time.
2. Create a service instance.
3. Wait for it to exist.
4. Delete the service instance.
5. Call cf -v services in a tight loop.

This is a race condition, so it only fails if the delete operation finishes while cf services is processing. When we run our test suite, though, it fails about 25% of the time, which I think translates to roughly one failure for every 20 service instances.

API dumps are included below.

Provide more context
- platform and shell details: Ubuntu 18.04.1 LTS (running in concourse)
- version of the CLI you are running: cf version 6.40.1+85d04488a.2018-10-31
- version of the CC API Release you are on: 2.126.0

Note: As of January 2018, we no longer support API versions older than 2.69.0/3.4.0 (CF Release: 251 / CAPI Release: 1.15.0)

API dump for 2 consecutive invocations of cf -v services. The first succeeds while the service is still in the "delete in progress" state; the second shows the error.

Please fill out the issue checklist below and provide ALL the requested information.

[X] I tried updating to the latest version of the CF CLI to see if it fixed my problem.

[X] I attempted to run the command with CF_TRACE=1 to help debug the issue.

[X] I am reporting a bug that others will be able to reproduce.

Describe the bug and the command you saw an issue with
On plugin uninstall, the CLI invokes the plugin with special args to notify it of the uninstall. If a plugin that blocks (e.g. waiting for user input or for streaming data) does not handle this case, it becomes impossible to uninstall via cf uninstall-plugin.

Expected behavior
The CLI should always be able to uninstall a plugin. Maybe it should take a force flag.

To Reproduce
Steps to reproduce the behavior; include the exact CLI commands and verbose output:
1. Build the following plugin:
```go
package main
```

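The plugin source above is truncated in the original report. The failure mode it needs to exhibit can be mimicked with a plain program: per the plugin docs, cf uninstall-plugin re-invokes the plugin binary with the special first argument CLI-MESSAGE-UNINSTALL and waits for it to exit, so a plugin that ignores that argument and blocks on input hangs the uninstall. This stand-in uses only the standard library rather than the real github.com/cloudfoundry/cli/plugin API:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
)

// uninstallRequested reports whether the CLI invoked us with the special
// uninstall notification argument ("CLI-MESSAGE-UNINSTALL").
func uninstallRequested(args []string) bool {
	return len(args) > 1 && args[1] == "CLI-MESSAGE-UNINSTALL"
}

func main() {
	if uninstallRequested(os.Args) {
		// A well-behaved plugin cleans up and exits here. The buggy
		// plugin in this report ignores the message and blocks below,
		// which is what hangs `cf uninstall-plugin`.
	}
	fmt.Println("blocking on stdin...")
	// Blocks until a newline or EOF arrives; an interactive plugin
	// waiting on user input never returns from here.
	bufio.NewReader(os.Stdin).ReadString('\n')
}
```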
A quick look through the test setup leads me to believe that this suite has a bunch of race conditions around the shared server.

In a SynchronizedBeforeSuite, we set up one ghttp.Server: https://github.com/cloudfoundry/cli/blob/master/api/cloudcontroller/ccv2/cloudcontrollerv2suitetest.go#L29

In a BeforeEach, we reset the state of the Server: https://github.com/cloudfoundry/cli/blob/master/api/cloudcontroller/ccv2/cloudcontrollerv2suitetest.go#L40

However, this is inherently racy because the server is used in parallel by multiple nodes, which can interleave VerifyRequests with incoming HTTP requests from different tests.

In the case above, we verify a GET: https://github.com/cloudfoundry/cli/blob/master/api/cloudcontroller/ccv2/applicationinstancestatus_test.go#L137

But there is almost certainly some other test running in parallel that is issuing a PUT.

This (or a similar failure) was easily reproducible by navigating to the ccv2 package and running:

```shell
ginkgo -nodes 16 -untilItFails
```

It reliably failed within roughly 50 attempts.

Suggestion

The cost of setting up a ghttp server is almost negligible. I noticed practically no difference in test run time after moving the setup from the SynchronizedBeforeSuite to a BeforeEach (with the corresponding teardown in an AfterEach), and running the above command didn't fail in several hundred runs.

Please provide details on the following items. Failure to do so may result in deletion of your feature request.

What's the user value of this feature request?
The ability to have two separate CF CLI environments without conflicting global state.

Who is the functionality for?
- Developers working on multiple projects simultaneously
- Automated scripts that run cf (possibly conflicting with other executions or interactive use)
- CI environments that build and deploy in parallel

How often will this functionality be used by the user?
As needed; most likely written into scripts that use cf.

Who else is affected by the change?
Default behavior will be unaffected, as the ability to isolate instances will require an explicit flag or environment variable

Is your feature request related to a problem? Please describe.
I'm using a CI server to push my code to multiple spaces; however, doing this in parallel often fails depending on the order in which the cf commands run, e.g.

Describe the solution you'd like
Some way to specify a different "instance" of cf. One easy way would be to allow the user to set a custom CF_HOME environment variable which would be used to locate the config.json file.
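If such a variable were honored, parallel usage could look like the sketch below. A shell function stands in for the real cf binary, purely to illustrate which config directory each invocation would select; nothing here is actual CLI behavior:

```shell
# Stand-in for the real CLI (illustrative only): it just reports which
# config directory a CF_HOME override would select.
cf() { echo "config dir: ${CF_HOME:-$HOME/.cf}"; }

mkdir -p /tmp/cf-env-a /tmp/cf-env-b

# Two isolated "instances": each invocation reads/writes its own config.json.
CF_HOME=/tmp/cf-env-a cf   # config dir: /tmp/cf-env-a
CF_HOME=/tmp/cf-env-b cf   # config dir: /tmp/cf-env-b
```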

Describe alternatives you've considered
Another solution would be to add an option for specifying the config.json path. A file-accepting option already exists on some commands, e.g. cf create-service:

```
OPTIONS:
   -c   Valid JSON object containing service-specific configuration parameters, provided either in-line or in a file. For a list of supported configuration parameters, see documentation for the particular service offering.
```

However, the -c option is already taken by some commands (e.g. cf push).

A workaround for a CI case is to use completely isolated build agents, though this adds extra overhead and build time.