If I were the security engineer for an organization, I would definitely put a policy in place that only company computers may be used. That makes sense, and it protects not only company data but also employees from liability.

Yet there is one case in which such a policy bugs me: a competent developer (not a junior developer; a mid- to senior-level developer) will potentially have on his work machine:

17 database engines;

20 docker containers;

10 test virtual machines (let's say using something like qemu).

That is a very common scenario in startups and post-startups (a startup that managed to survive several years). Moreover, this developer will be changing his docker containers and virtual machines every week, since he will probably be testing new technology.

Requiring this developer to go to the security engineer every time he needs new software installed is completely impractical. Moreover, since a company would have more than one such developer, going with the typical company-managed computers for everyone involves drawbacks:

Maintaining the computers of, say, six such developers is a full-time job for a competent security engineer.

The manager of those developers will be terribly angry, because his team will spend 50% of its work time waiting for the security engineer.

On the other hand, allowing the developers to use their machines freely is dangerous: one rogue docker container or virtual machine and you have an insider. I would even say that these developers' computers are more dangerous than that of a common user (say, a manager with spreadsheet software).

How do you make sensible policies for competent developers?

Here are some other solutions I could think of (or saw in the past), most of which were pretty bad:

Disallow internet access from the development machines:

You need internet access to read documentation;

You need to access repositories, often found on the internet.

Give developers two computers, one for internet access and one for development:

Complaints about lost productivity: typing Alt+2 to get the browser is faster than switching to another computer;

Repository access is cumbersome: download in one place, copy to the other.

Encourages the developer to circumvent the security and make a USB-based connection between both machines so he can work from a single computer (I saw this happen more than once).

Move development to the servers (i.e. no development on desk machines):

This is just moving the same problem deeper, now the rogue container is on the server;

Arguably worse than allowing the developer to do what he pleases on his own machine.

Honestly, I find not giving developers their own machine with full admin access completely illogical, and most of the big players seem to understand this. I certainly wouldn't work for a company that did not. Just don't hire completely incompetent people.
– Alexander O'Mara, Aug 30 '16 at 16:00


That's the worst security engineer I've ever heard of. Sounds more like frantic tinfoiling. I hope this is just an extreme example that isn't really happening now. In fact, security at the expense of usability is not security. Getting in the way of the business, in the way of talent, is not security. Look into the CIA triad, in particular the "availability" portion.
– Mark Buffalo, Aug 30 '16 at 17:28


Good luck finding devs that are willing to work on a computer they cannot administer.
– Navin, Aug 30 '16 at 19:14


A big uncontrolled space for containers and VMs is necessary to get any development done, and it's a reasonable alternative to full control of the host machine. Contained web browsing is usually OK. Devs want full control of the machine where running IDEs in containers is too much of a pain, or in VMs too slow. The idea that you will have only whitelisted binaries executing and opening ports is the opposite of what a developer needs. That is where the demand for control (often confused with security) creates an impossible situation.
– Rob, Aug 31 '16 at 2:52


I turned down a job offer after finding out the company did not allow developers internet access. When a huge chunk of your day is researching why thing X did or didn't do action Y, not having the internet is a ridiculous hindrance.
– Ethan The Brave, Aug 31 '16 at 14:34

13 Answers

It is usual practice to give developers local admin / root rights on their workstation. However, developers should only have access to development environments and never have access to live data. Sys-admins - who do have access to production - should have much more controlled workstations. Ideally, sys-admin workstations should have no Internet access, although that is rare in practice.

A variation I have seen is that developers have a locked-down corporate build, but can do whatever they want within virtual machines. However, this can be annoying for developers, as VMs have reduced performance, and the security benefits are not that great. So it's more common to see developers simply having full access to their own workstation.

We use and recommend an approach using what we call PAWs (privileged access workstations) or SAWs (secure access workstations). Users who need access to prod have one workstation/laptop for daily use and work, and a second, locked-down workstation/laptop for production access.
– Xander, Aug 30 '16 at 17:21


@Xander - so you're a real-life example of this "rare in practice" approach. Good on you! Can I ask what sector you're in? I've mostly seen this in government suppliers and aerospace.
– paj28, Aug 30 '16 at 18:45


@paj28 I work for Microsoft. And yes, not only do we teach it, but we've fully implemented it here as well. :-)
– Xander, Aug 30 '16 at 18:51


I wonder: what if the development work is performed by the same people as the sysadmin work (another common scenario in startups)? I guess every developer should get one PAW and one SAW.
– grochmal, Aug 31 '16 at 2:18


@JonathanPullano: There is no practical way to prevent a determined developer from walking off with the source code. You would have to cut all internet access and even perform X-ray body searches (in and out) to prevent drives from being smuggled through. You'd have to disallow all printing and watch the trash bins, etc. Next, you'd need to ban all ways of taking pictures, which means no phones, cameras, etc., while also using video cameras to actively monitor what everyone is doing. This would create an environment that few developers would be willing to work in.
– NotMe, Sep 2 '16 at 20:56

First, not having "local admin" rights on my own machine is a sign that I should look for a job elsewhere. It's nearly impossible to write code, fiddle with stuff, and maintain a toolchain if you have to ask permission every time you need to update (or test out) a new dependency or tool. So, here are the permission levels I require. Keep in mind I am usually pretty high up on the ladder, so to speak.

Total and complete Admin over my local machine

Total and complete Admin over all development and testing hardware

Some level of admin access to the production servers (this gets tricky: I don't need or want everything, but I need enough to diagnose and fix problems that occur in production, and enough to actually deploy code, assuming that I'm the one who has to oversee code deployment). Usually this level of access evolves over time, but it starts with log files.

Less than that, and you can go find a new developer.

That said, there is a lot of risk involved with that level of access. So what I normally recommend is a separate network. Put all the dev "stuff" in its own network. Its own AD, its own file hosting, its own everything, and never let it talk to the production network. (But do let it get out to the internet.)

Yes, this means duplicate hardware (or VPSs), but you need that anyway for testing. Yes, it means a little more overhead when upgrading or administrating, but again, it's needed for testing. You also need a place to see "what happens to X software if I upgrade the AD server?" Look at that: you have an entire network of test machines ready for exactly that kind of test.

What I have successfully implemented (with the help of a good IT team) is a separate VLAN for all dev "stuff" and a single VPS host that dev has full access to, to do with whatever it wants. On that host is an AD server that is set up by IT to look like a smaller version of the company's AD server. Then there is a set of documents and guidelines for what, for example, a web server should run, what a DNS server should run, what an xyz server should run. Part of "development" is to install and configure those VPSs for our use. Everything on that VLAN is totally isolated from production and considered external to the company. Finally, a set of "punch-throughs" is created for assets that we did need access to (like email). Normally this was handled as if we were external, and the list of these tools was very small.
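The punch-through idea above can be sketched as a toy default-deny policy. Everything here (VLAN names, hosts, ports) is hypothetical; a real setup would express this in firewall rules, not application code:

```python
# Toy model of the VLAN isolation described above: traffic from the dev
# VLAN into production is denied by default, except for an explicit list
# of "punch-throughs" (services devs may reach, e.g. email). All names
# and ports are invented for illustration.

PUNCH_THROUGHS = {
    ("dev-vlan", "mail.corp.example", 993),  # IMAP access to company email
    ("dev-vlan", "mail.corp.example", 587),  # SMTP submission
}

def is_allowed(src_vlan: str, dst_host: str, dst_port: int) -> bool:
    """Default-deny from the dev VLAN into production; internet stays open."""
    if src_vlan != "dev-vlan":
        return True  # other VLANs are governed by their own policies
    if dst_host.endswith(".corp.example"):
        # Production asset: only explicit punch-throughs pass.
        return (src_vlan, dst_host, dst_port) in PUNCH_THROUGHS
    return True  # everything else (the internet) is reachable

print(is_allowed("dev-vlan", "mail.corp.example", 993))  # True: punch-through
print(is_allowed("dev-vlan", "db.corp.example", 5432))   # False: blocked
print(is_allowed("dev-vlan", "pypi.org", 443))           # True: internet
```

The point of the sketch is only the shape of the policy: isolated by default, with a short, auditable exception list.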

Sorry, but you can't have "Some level of admin access to the production servers". The best I'll let you have is you get on a WebEx (etc.) with me or shoulder-surf at my desk, and you tell me what you want, then I give it to you. Developers don't get to touch QA or production servers. You give us admins the code, we deploy to QA, then let the testers beat hell out of it before doing a production change. If I let a dev touch one of my QA or Prod servers, he's likely to make undocumented changes I can't replicate.
– Monty Harder, Aug 31 '16 at 17:00


The reason why this extends to QA is that the QA environment must mirror the Prod environment as closely as possible, so that any hard-coded ASS|U|ME-tions that work fine in Dev but will break in Prod will first break in QA, so that I can tell you to fix that mess and try again before it ever gets to Prod.
– Monty Harder, Aug 31 '16 at 17:04


Monty's philosophy is key to keeping a pristine production environment. The key word he used was "replicate": oftentimes devs (I know devs: I was afflicted with development for over ten years and am slowly being treated for the condition, though I occasionally relapse) will find it's broken and fix it. Yay, they fixed it! But how did you fix it? What else did you break? Nuh-uh. Getting back up ASAP will cost us more than getting back up with a defined change plan. Admittedly, these practices are only tangentially related to "security", but they're still very important.
– corsiKa, Aug 31 '16 at 18:46


@corsiKa I can argue that separation of roles is a security matter. Just because the developer isn't a malicious attacker doesn't mean he can't do damage to my production servers on the same (or worse) scale as that attacker. The central point is that it enforces a true knowledge transfer from dev to admin. He can't "just know what to do"; he has to document it in writing so that any member of my team will be able to read "what to do".
– Monty Harder, Aug 31 '16 at 20:52


I believe the PCI-DSS spec says that devs don't get access to production. We had a separate team that accessed production and committed changes back into the code base. If you are in a PCI-DSS environment (processing credit card payments, for example), you will probably not get prod access.
– lsd, Sep 1 '16 at 11:52

Your job is to prevent change (known bugs and vulnerabilities are better than unknown, right?), but mine is to change things. This puts us at an impasse. My job is to create/change things. If your policy prevents that, then, like any other obstacle, a part of my job is finding a way around that.

Which do you think is more of a danger, a developer that you've granted access to the things he needs to do his job, or one who has obtained that access by learning how to circumvent all of your defensive measures? And why is your company paying you and your developers to fight this war against each other when you should be working together?

The simple answer is, give developers access to what they need to do their jobs. And talk with them. There may be rare conditions (clean room reverse-engineering with major legal consequences, or handling top-secret classified government data) where you need a more complicated policy. But for the most part, it's not that big of a deal. If you start a war and make your developers enemies, they will either leave or become much more dangerous than if you work with them.

Sensible measures include not allowing production database dumps on dev laptops (only testing databases with bogus data). That way, if the laptop gets stolen, no confidential data is lost. But if the dev needs to debug things, they still need access to copies of the production database somewhere in a controlled environment.
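The "bogus data only" rule can be sketched as follows. The schema and values are invented for illustration; real projects often reach for a library such as Faker instead:

```python
# Generate bogus rows for a dev/test database so that no production data
# ever lands on a developer laptop. Schema and values are made up; the
# obviously fake "@example.test" domain makes the data clearly synthetic.
import random
import string

def bogus_email(rng: random.Random) -> str:
    user = "".join(rng.choices(string.ascii_lowercase, k=8))
    return f"{user}@example.test"

def bogus_customers(n: int, seed: int = 0) -> list:
    rng = random.Random(seed)  # seeded, so test fixtures are reproducible
    return [
        {"id": i,
         "email": bogus_email(rng),
         "balance_cents": rng.randint(0, 100_000)}
        for i in range(n)
    ]

rows = bogus_customers(3)
print(rows[0]["email"].endswith("@example.test"))  # True: clearly fake domain
```

Seeding the generator keeps the fake fixtures stable across runs, which makes test failures reproducible without ever touching real customer records.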

Restricting internet access is ridiculous. You might as well require all of your developers to write their code on paper with a feather quill and ink.

Talk to your developers, find a way to give them what they need to do their jobs while maintaining the security that you need to keep the important data secure. The details will depend on your company and what data you're dealing with. But it isn't likely to need draconian measures.

Except it's the Business that defines your role and restricts your tools according to their needs and goals. If they want you to use pen and paper, then that's their call.
– schroeder♦, Sep 1 '16 at 6:25


@schroeder - and if you want to find another job, then that's your call.
– superluminary, Sep 1 '16 at 10:33


@schroeder True. Businesses are entitled to be as stupid as they want, but stupid businesses shouldn't expect anything but stupid developers and a lot of broken software.
– jpmc26, Sep 1 '16 at 23:24


If a stolen laptop is a concern, shouldn't the disk be encrypted?
– Andy, Sep 3 '16 at 1:08

A security engineer doesn't maintain computers; that's what the service desk does. In your case you will require him to install three tools:

a hypervisor

docker

database software

From there he can add and remove machines for development as much as he wants (this shouldn't require a security engineer to intervene). With regard to your "rogue container": in general you don't deploy containers to another server; you deploy Dockerfiles, which pull code from a code repository or download a signed binary of compiled code (which is even safer).
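The signed-binary idea can be sketched with a digest pinned in the deploy configuration. A real pipeline would verify a cryptographic signature (e.g. with GPG); a pinned SHA-256 is the simplest form of the same idea, and the artifact and digest here are made up:

```python
# Verify a downloaded build artifact against a checksum pinned in the
# deployment config before deploying it. Refuse anything that doesn't
# match the pin. Artifact contents and the pin are invented.
import hashlib
import hmac

def verify_artifact(data: bytes, expected_digest: str) -> bool:
    actual = hashlib.sha256(data).hexdigest()
    # constant-time comparison: a good habit even for public digests
    return hmac.compare_digest(actual, expected_digest)

artifact = b"pretend this is a compiled binary"
# Normally the pin lives in version-controlled deploy config, not beside
# the artifact; it is computed here only to make the example runnable.
pinned = hashlib.sha256(artifact).hexdigest()

print(verify_artifact(artifact, pinned))             # True
print(verify_artifact(b"tampered binary", pinned))   # False
```

The useful property is that a rogue or tampered image fails the check before it ever runs, rather than being discovered afterwards.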

In terms of a rogue container, I can only imagine an attacker gaining access and adding more code. This is why you need to have security embedded at every step of the SDLC, to ensure that all code is at least reviewed by another dev before being pushed up the tree. Furthermore, you can also integrate code scanners to automatically scan for suspicious or vulnerable code stubs.

On your third point, it actually depends. Companies like Riot Games are doing exactly that. They found out that limiting intelligent individuals will lead those individuals to circumvent controls. So they decided to use simple rules and effective awareness training to make sure security stays in the back of everyone's mind, and they gave out full administrative privileges. They handed out little cards which stated what people should take care of and be careful about.

I have to say that I really like the idea of providing training instead of locking down computers. Good developers are interested in technology, and good training in how security threats may affect their machines may be just the right thing to catch their interest. The tricky part is how to construct this training; it certainly cannot be the same training as the one provided to other parts of the company.
– grochmal, Aug 31 '16 at 1:12


Do you have a citation for your Riot Games comment? I suspect it'll be in their engineering blog.
– Dan Pantry, Aug 31 '16 at 11:15


Signed binaries for containers? Ha, I wonder how many people are doing that in practice. I bet most containers involve downloading a shell script from a random third-party website over plaintext HTTP to set up some kind of new-fangled node.js mess.
– Matti Virkkunen, Aug 31 '16 at 11:22

@DanPantry part of the talk they gave at Brucon
– Lucas Kauffman, Aug 31 '16 at 13:02

We give our developers admin-access on their computers and let them know what the rules are. Most run Linux but we have a few devs on Windows as well. They all know to store their documents, designs, etc. on our fileshares (that have more specific permissions) and push their source code to our Git server.

They also know that I won't spend much time fixing a problem with their OS. If their computer malfunctions we will often just wipe it and reinstall the OS. The common applications and settings are automatically installed via Puppet, and they have to do the rest themselves. Most of them have a Git repo with dotfiles (the settings and preferences for their environment).

The time that they lose in such a case is enough motivation for most of them. They like to get work done instead of fiddling with fixing their OS. If they lose important work because it was stored only locally, they'll be frowned upon by their colleagues and boss.

We don't block any websites or applications (except for a DNS-based anti-malware filter), but we have some company policy rules about things like illegal software. We rely on peer pressure for most things. People who spend their time on Facebook aren't productive and don't last long. Much of our policy is based on an honor system, and that appears to work well.

I wish more people had so much confidence in their developers, since confidence is a two-sided relationship. Of course, that requires a very good hiring process; you simply cannot afford to hire an incompetent developer. Then again, you should always aim to hire only competent developers.
– grochmal, Sep 1 '16 at 22:11

This fundamentally depends on context. Consider the following two setups, both of which I've encountered in the wild:

Developers work on machines that they either own, or have complete access to, even including installing their own operating systems. There are no restrictions on how they write the application. They have SSH access to production systems whenever they're connected to the network, including VPN.

All development is done in an air-gapped development lab, that developers are not allowed to bring electronic devices into. All computers have all permitted software preinstalled. Deployable artifacts are securely delivered on physical media to another team, who are responsible for deployment.

Both of these setups were appropriate in context - taking into account the threats the organisation faced, and the needs of the project.

There is a fundamental tradeoff here. Any move away from the first possibility, towards the second, reduces productivity. Anyone who has any control over the application is in a trusted position. Anything that you can't trust them to do is something that you'll have to either do without or create a handover process for someone else to do. You can't revoke trust without reducing productivity.

It depends on the kind of software you're developing and the size of your organization.

For a single rockstar developer's workstation in a small company, I would use an executive-level security exception. The risks (including the annoyance of IT, executives and fellow developers) would have to be discussed, but ultimately, if the rockstar doesn't get his way, he's probably going to move to another company. These guys do not tolerate friction; if they encounter it, they can easily find more interesting places to work.

A more typical scenario than an uber-workstation is to have a development cluster where operations manages the life and death of the VMs, the environment is monitored with IDS/IPS, and internet access (on the development cluster) is limited but opened as needed, e.g. for documentation. Nothing is wrong with whitelisting every technology source related to your development effort. Developers can't pull in code willy-nilly anyway; they need to document their sources and verify weird licenses.

If you can get the ear of the rockstar, he can help push the requirements to ops and executives and architect the cluster and processes, and educate the development teams on the need.

If there's a budget and the developer is reluctant... then IMHO, you're not dealing with a rockstar but a diva, and their whining should be carried right up to that executive risk signoff.

The hard part becomes managing machine lifespans and making sure developer afterthoughts don't become "operations-like" developer-production systems. But that's much easier than VMs on developer workstations.

"nothing wrong with whitelisting every technology source related to your development effort"... You'd spend all your time appending websites to the whitelist, until the dev decides to find a place where there's less friction.
– Lucas Trzesniewski, Aug 31 '16 at 14:19

Give your rockstar a separate admin account to be used solely to administer his workstation. That way it's isolated from the account he uses to browse SE (and God knows what other sites, some of which might carry malware).
– Monty Harder, Aug 31 '16 at 17:06


@LucasTrzesniewski - If a developer can't be bothered to document the sources of their code, I'm happy to see them go. The good developers I've worked with tend to prefer to know that their peers aren't pulling in stuff from random git repos with dubious histories and unposted licenses. BTW, I am talking about development platforms here, not their workstation.
– mgjk, Aug 31 '16 at 21:01


That being said, developers tend to use a lot of libraries for any non-trivial project, and (that's important) they have to try even more libraries before choosing the ones they'll use in the end. I'm all for having clearly defined and reviewed code dependencies, but access to development resources on a case-by-case basis would be a major PITA and an impediment to the development cycle.
– Lucas Trzesniewski, Aug 31 '16 at 23:46


"developers usually don't use GUI access to those machines" This only applies in the Linux world.
– jpmc26, Sep 2 '16 at 0:03

This question raises a number of interesting questions and challenges. As
someone who worked as a developer for many years and then moved into security, I
can appreciate many of the arguments and points of view expressed in the
responses and comments. Unfortunately, there isn't a definitive answer because
the correct solution depends on context, risk exposure and the risk appetite of
the organisation.

Security is not something you can measure in absolutes. Any security person
who constantly talks in absolutes and insists on enforcing specific policies
regardless of anything else is either inexperienced or incompetent. The role
of the security engineer is to facilitate business processes, not impede them,
and to ensure business decisions are made which are
informed by the relevant security risks. A good security engineer will know that
any policy which prevents staff from doing their job will inevitably fail as
staff will find ways to work around the policy. More often than not, these
work-arounds will be even less secure. It is the responsibility of the security
manager to understand the various roles within the organisation and ensure that
policies are structured in such a way that they not only support what the role
requires, but encourage good practices and discourage the bad ones. This can be
very difficult, especially in large organisations. At the same time, developers
and others within the organisation also need to recognise that they may not have
all the relevant information to understand why certain policies are
required. Too often, developers and others see the security engineer as an
obstacle or interfering bureaucrat who doesn't understand what they need to get
their job done.

More often than not, the way to address these issues is through communication. I
have frequently had developers come to me frustrated because some policy which
they see as pointless is getting in their way. Once we sit down and discuss the
issue, a number of things typically happen:

The developer has either misinterpreted the policy or has read into it
assumptions which are incorrect and given the impression the policy prevents
something which it doesn't. This will often result in a review of the policy
to improve clarity.

The security engineer becomes aware of a legitimate business requirement which
needs to be satisfied in a way which maintains adequate security
controls. This will likely result in a review of the policy.

The developer becomes aware of some other requirement or risk which was not
obvious to them initially. This often results in the developer identifying
alternative solutions to satisfy their requirements.

An issue is identified which cannot be resolved to the satisfaction of either
party in a manner which is within the accepted risk appetite of the
organisation (i.e. accepted levels of risk). This situation will typically
result in the issue being escalated to the executive level for a decision. The
difficulty of doing this will depend on the size and structure of the
organisation. Once the decision is made, both the developer and the security
engineer need to work within the parameters set by the executive to find the
best solution they can.

There have been a number of responses critical of policies which adversely
impact developer productivity. While any developer should
raise such concerns, at the end of the day, they either must accept whatever
decision the executive makes or look for an alternative employer. To assume you
know better or that you have some special right to ignore the policy is arrogant
and dangerous for both you and your employer. If you are convinced you are
right, then you should be able to convince management. If you can't, then
either you're not as good as you think, you lack adequate communication skills
to present a convincing argument, you don't possess all the information, or
you are working for an
incompetent employer. After 35 years in the industry and despite what Dilbert
may lead you to think, the latter is not as common as you may expect. The most
common source of conflict in this area is due to poor communications and lack of
information.

A common failure amongst developers (and one which I have been guilty of) is to
focus so much on your specific task or objective that you miss the bigger
picture which team leaders, managers and the executive need to also
manage. I have seen environments where developers were given a lot of freedom
and trust, which resulted in high levels of productivity, but it also
resulted in other problems: the situation where a
key developer has left or is off sick for an extended period of time and nobody
else can pick up their work because it is all on their uniquely configured and
managed desktop or trying to debug a difficult issue which cannot be easily
reproduced due to lack of standard setups/configurations, or dealing with a
possible security breach due to a developer accidentally leaving some service
running which was under development and lacked standard security controls. Being
a developer focused on a specific domain or technology does not guarantee
expertise in everything else. Some of the best developers I've ever worked with
have been some of the worst when it comes to security and/or management of their
environment. This is probably partially due to the focus on a specific problem
and more likely simply due to capacity. None of us have the capacity to be
across everything and we need to recognise that sometimes it is important to
defer to those who specialise in areas we don't.

The Changing Environment

One of the main reasons for policies which restrict what can be done on the
desktop is due to an underlying assumption that desktops within the local
network have a higher level of trust than desktops outside the
network. Traditionally, they are inside the firewall and other perimeter
defences and have more access to information resources. This means they pose a
higher risk and therefore need more security controls. Other reasons for
restrictive policies/practices include standardisation of environments, which
can reduce support costs and increase consistency (which was especially true
when there were more applications where were platform/OS dependent - remember
all those horrible applications which needed AtiveX or a specific version of
IE).

The growth in virtual machines and containers, cloud services and commodity IT
services and increased network capacity is resulting in a number of changes
which will likely make many of the issues raised in this thread irrelevant. In
particular, the move towards zero-trust networks will likely see significant
changes. In a zero-trust network, devices inside the network are not seen as
having any special additional level of trust compared to devices outside the
network. You are not provided with the ability to access resources simply
because you have the right IP address. Instead, trust is based more on a
combination of user and device information. The policy which determines what you
can access is determined by your credentials and the device you are connecting
from. Where that device is located is irrelevant. This is also a model which
fits far better with the growth in BYOD and the increased mobility of the
workforce and growing demands to employ staff based on skills/ability and not
geographical location.
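The user-plus-device trust decision described above could be sketched like this; the attributes, resources and the policy itself are invented purely for illustration:

```python
# Toy zero-trust access decision: the network location of the device grants
# nothing. Access depends on who the user is and the posture of the device
# they connect from. All attribute names and rules are hypothetical.
from dataclasses import dataclass

@dataclass
class Device:
    os: str
    os_supported: bool      # e.g. False for Windows XP
    disk_encrypted: bool

@dataclass
class User:
    name: str
    roles: frozenset

def may_access(user: User, device: Device, resource: str) -> bool:
    if not device.os_supported or not device.disk_encrypted:
        return False                     # device posture fails, regardless of user
    if resource == "prod-logs":
        return "developer" in user.roles or "sysadmin" in user.roles
    if resource == "prod-shell":
        return "sysadmin" in user.roles  # devs never get a production shell
    return False                         # default deny

dev = User("alice", frozenset({"developer"}))
laptop = Device(os="Linux", os_supported=True, disk_encrypted=True)
xp_box = Device(os="Windows XP", os_supported=False, disk_encrypted=False)

print(may_access(dev, laptop, "prod-logs"))   # True
print(may_access(dev, laptop, "prod-shell"))  # False
print(may_access(dev, xp_box, "prod-logs"))   # False: device posture fails
```

Note that the same user is allowed or denied depending on the device they connect from, and the device's IP address never enters the decision.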

Once you remove the level of 'trust' associated with the device, you don't
require controls over what you can apply to the device. You may still require
devices to support specific profiles - for example, you may refuse to allow
anyone to access your resources if their device is running Windows XP
etc. However, you are less concerned about the device. Likewise, you won't be
doing as much work directly on the device. To some extent, the device will be
more like a thin client. You will use VMs and containers hosted remotely. This
won't in itself solve all the problems and may be seen as just moving some of
them from the local desktop to the remote VM or container. However, once you
combine this with various DevOps style orchestrations, many of the reasons
developers may need enhanced privileges are removed. For example, you may not
require any access to the production systems. Promotion of new code will be
handled through an orchestrated continuous integration system and when you need
to debug an issue, you will be provided with a VM or container which is an exact
clone of the production system.

None of these changes will magically solve anything, but they will provide a
wider range of more flexible tools and solutions with potentially less
complexity. Lowering complexity will make security management much
easier. Easier security management will lead to less unintentional or
unnecessary restrictions or impediments to performing work duties. However, at
the end of the day, the key requirements are:

Recognition by all that one size does not fit all. Everything needs to be
evaluated within the context of the organisation.

Willingness to put the needs of the organisation first.

Good bi-directional communication.

Willingness for all parties to work towards solutions which are mutually
acceptable.

Willingness for all parties to compromise.

Willingness to work within the system and adjust your workflow or preferred
way of doing things to fit with the requirements of the organisation

And let's not even start on how zero-trust networks are important when IoT devices are around. That "mug that tweets" is often so badly constructed that it breaks any sensible security measures. I just wonder how the user and device information will be managed to grant access to a system. Passive fingerprinting (akin to OpenBSD's pf, maybe)?
– grochmal, Sep 2 '16 at 16:01

Yes, IoT is a concern and is certainly something which will speed up the move to zero-trust networks. The challenge is user authentication: how do we make it both usable and secure, and avoid "honey pots" of valuable data? Some sort of biometric input is almost certain. Authn is the holy grail for ICT in the same way power storage is for solar energy. Once we crack those nuts, a lot will change.
– Tim XSep 4 '16 at 1:14

The problem is worse than you think

There is no spoon

Development is not about software. Development is about making or improving products, services and processes. Software is an important gear, but it is not the only one. Good developers will define the processes in the wider sense, in order to know which software components to create as well as which human, logistical and risk-management processes to propose. It makes no sense to develop a software system that depends on human, logistical and paper processes that are not implemented.

There are no rules for development, because development is what defines the rules. That is what makes development the worst environment to secure.

But that does not mean some controls should not be established. On the contrary, many controls should be set up by the development team itself.

Engineering process

There are companies that advocate separation between business and technology
in a top-down process. This is very popular because it suggests that business people with no technical knowledge should be on top, and lazy people love that. But in engineering, top-down design simply does not work. Feynman (1986) wrote a fine piece about this for the presidential commission that analysed the Challenger explosion: top-down engineering processes eventually break the company. My market experience reinforces this understanding.

The Challenger explosion is a great example. NASA managers testified on camera, before the inquiry commission, that they had developed a rubber that remained flexible at temperatures well below freezing. That claim was contradicted by a simple high-school physics experiment performed by one of the commissioners: compress the rubber component with a clamp and put it in ice water. It is a great example because this rubber component needed to stay flexible for the booster not to explode; since it needed summer temperatures to do that, the booster only worked in summer. A characteristic of a single component defined a visible, and very limiting, characteristic of the entire product.

Engineering should happen bottom-up, because you need to know the limitations and weaknesses of each component in order to design the processes that mitigate them. More often than not the mitigating processes are not software, and they will affect the cost of the product. In other words, the characteristics and limitations of the individual components define the characteristics of the products, services and processes.

Top-down processes are fundamentally broken. Many companies that adopt this philosophy on paper still have some market success, but when you dig into their biggest and most successful projects, you learn that they were conducted outside the normal company rules. The biggest successes are attained when one person who has deep engineering knowledge and market-wise vision is informally empowered. Since this happens informally, management thinks the top-down process works and brands all the other teams as incompetent, turning a blind eye to the fact that the initial project outline, as it left the "top" phase, was completely ignored and does not describe the products, services and processes actually built.

Your manager can decide that you will engineer a teleport device by the end of the month, because he has concluded that this would allow high profit in the travel business... but that will not make it happen. Top-down projects are like that: they set expectations that are not technologically sound.

Do not get me wrong, it is good to look at the problem from many angles. Bottom-up, top-down, SWOT and more are all healthy for the analysis, but the genuine engineering effort is bottom-up. There is no genuine goal without technical viability.

Development Security

We have to remember that software developers change the company's software on a regular basis, and in that way they can change what appears on anyone's screen, send automated e-mails to anyone (including themselves), or open back-doors to do whatever they want. In other words, a criminal hired as a developer can do significant damage to the company.

Worse than that, many companies do not enforce the provenance of code from the source repository, so a hired developer can deliver a binary that differs from the source he handed over. This allows criminal developers to hijack the company's systems: if they are let go, things will soon stop working.

To me, development security should focus on:

Source code version control: ensure that the source code and the third-party components it needs are stored in a secure location.

Strategic division of labour: junior and temporary developers must have limited access to the source code and data. They only need access to the components they are changing; this prevents a junior developer from understanding the inner workings of all the systems and exploiting that knowledge.

Trust the core developers: senior/core developers have to know everything and have access to everything, in order to plan and distribute the tasks and to diagnose severe problems. This core must have access to the whole thing, in both development and production, and they are your partners in the development of the security policies. We must accept that the core developers sort of own the company. My old boss used to say: "we are lucky Lucas is on our side; on the other side he would destroy us". Core developers can do a lot of damage if they want to, and no firewall or production control can prevent that.

Separate the environments through firewalls: separate your development network from your test network and from your production network. At one company I defined the 10.1.* network as development, 10.2.* as testing and 10.3.* as production. The 10.2 and 10.3 networks only received code through the corporate CVS and built it automatically on the admin's command. Although it was a small startup and I was on both the production and the development teams, I put some bureaucracy in place to guard against my own mistakes (developers can be your best allies). I also changed the terminal colors by network: when connected to a production server the terminal background was red, testing was yellow and development was green. Since all my servers used the same configuration, it was easy to confuse them with a prompt open. In my experience most problems come from badly tested software and new software installations. To be clear: knowing where you are is a powerful security feature in my opinion. It has nothing to do with access, but it is security.
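The color-by-network trick is easy to script. A minimal sketch, assuming Bash on Linux and the 10.1/10.2/10.3 scheme just described (the function name and color choices are illustrative):

```shell
#!/bin/bash
# Map a machine address to an ANSI background color code for its
# environment: 10.1.* = development (green), 10.2.* = testing (yellow),
# 10.3.* = production (red).
env_color() {
  case "$1" in
    10.1.*) echo 42 ;;  # green background
    10.2.*) echo 43 ;;  # yellow background
    10.3.*) echo 41 ;;  # red background
    *)      echo 49 ;;  # terminal default
  esac
}

# In ~/.bashrc: pick the color from the host's first address.
ip="$(hostname -I 2>/dev/null | awk '{print $1}')"
color="$(env_color "$ip")"
PS1="\[\e[${color}m\]\u@\h:\w\\\$\[\e[0m\] "
```

Dropping the last few lines into a `~/.bashrc` shared across all servers gives every prompt the right warning color with no per-host configuration.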

Hire a skilled test developer: the key aspect of testing is having large amounts of good simulated data that is meaningful for the problems the company faces. Monte-Carlo simulations are good for generating large datasets that mean something to the other developers and can lead to stronger, more resilient software and processes. To me there are no "production" failures: the developer is always to blame. The maintenance tasks and contingencies have to be written down. Software has to be resilient.
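Generating large, reproducible simulated datasets needs no heavy tooling. A sketch using awk (the CSV "orders" layout is invented for illustration; the fixed random seed makes every run produce the same data, so a failing test can be replayed):

```shell
#!/bin/sh
# Emit n simulated order records as CSV: id, amount, region.
gen_orders() {
  awk -v n="$1" 'BEGIN {
    srand(42)                         # fixed seed: reproducible test data
    split("north south east west", region, " ")
    for (i = 1; i <= n; i++)
      printf "%d,%.2f,%s\n", i, rand() * 1000, region[int(rand() * 4) + 1]
  }'
}
```

`gen_orders 1000000 > orders.csv` then produces a million plausible-looking rows in a few seconds; the interesting engineering work is in making the distributions match the problems the company actually faces.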

Source code review: have people review the source code before accepting a modification. All projects should be branched in source control and every merge should be reviewed. The review should only concern itself with malware detection, privilege escalation, access profiles and a good explanation of what the source code means and should do. Code quality is assured by the testing, not by the review. You can see this in action in most open source projects.

Test policies: tests are much more a corporate culture than a framework. Some companies adopt market frameworks and do some testing, but the quality of their code is still bad. That happens because you need people capable of engineering meaningful tests. The development must, in fact, become test-driven; I know no other secure way of developing. A curious thing is that humans, purchases and consulting all have to be tested too. Vendors often claim their products perform flawlessly, but I have not found a flawless product yet.

Policies are meaningless if not monitored. One company I know has a bureaucracy stating that every database table should be documented at the attribute level. 95% of the attributes are described as "the ${attribute name} of the ${table name}": it does not explain what the attribute really is, which values it may hold, and so on.

Appropriate compensation and work environment: to have good developers, in both skill and personality, you need good compensation policies. Money is important, of course, but it is not enough. You also need to offer perspective/stability, true recognition and a good work environment. For example, instead of a development office in New York where people live in small apartments, you can choose a smaller city where the same compensation buys a bigger house and a shorter commute. Bigger computers and good laboratories are also a plus for technology enthusiasts.

Data security: many activities require sensitive production data and should take place in a special lab. Unless your information is public or not sensitive, the best policy may be to put the labs in good neighbourhoods with controlled physical access, and to allow only simulated data and non-sensitive components onto personal laptops. That is possible: for example, I developed a 4.5-billion-record, heavily accessed data archive for a company. I did it at home and used absolutely no company data to that end. When I submitted the code it worked as expected on the first attempt, and other than hardware failures and migrations of the production environment, we have had 100% availability in 10 years. The risk of the developer taking the source code with him is not the relevant one: this particular system took me 3 months to develop, and a great deal of that time went into understanding the performance limitations of the components. That knowledge is now inside my head; even without the source code I could re-develop this solution in about a week.

Strong logs: it is important to know who did what. The best approach is for the logs to be generated by a framework: detailed screen logs kept for a short time, access and activity logs for longer, and the corporate logs for longer still. My critical systems log every time a screen is accessed (including the design of the screen). Some critical resources should also be logged by a trigger on the database itself, and the critical tables or resources should be flagged for source code auditing.

Log screening is difficult to do by hand, so learn to build filters over the logs that surface the critical things. One very useful filter is to cross complaint reports with user access and activity. If your logs are good enough you will see coincidences, for instance: before every problem, user1 logs in.
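That cross-check can be sketched even on plain-text logs. This assumes a hypothetical log format of `<epoch-seconds> <user> <action>` per line; the format and the function name are illustrative:

```shell
#!/bin/sh
# Print the users who logged in during the $2 seconds before complaint
# time $1, reading the log file named in $3.
suspects_before() {
  awk -v t="$1" -v w="$2" \
      '$3 == "login" && $1 <= t && $1 >= t - w { print $2 }' "$3" |
    sort -u
}
```

Running it once per complaint timestamp and comparing the outputs surfaces exactly the coincidence described above: a user who logs in shortly before most complaints.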

About not accessing production

The rules that keep developers from accessing production systems as users exist to stop a developer from submitting code that shows privileged information to his or her own user. I think this is a very, very weak security measure, and one that is easy to detect in source code auditing. There are several easy ways to circumvent it:

a developer plus one low-paid employee;

send himself an e-mail;

open a back-door in the system.

Source code auditing that looks for back-doors, privilege escalation and malware seems more productive. It lets you identify the bad developers while they are still testing their exploits, and fire them. Of course a skilled developer can hide an exploit in plain sight, so it is important to keep the language, and the variable names, plain and clear. Only resort to strange constructs at documented points of the application that need special performance or obfuscation. For example, 1 << 4 is the same as 1 * 16; the shift would only make sense if the language does not perform this optimization by itself and that spot is a performance bottleneck. Overly symbolic languages are bad for this very reason. Source code should be readable by any geek.
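A first pass of such an audit can be mechanized. This sketch only flags lines for a human reviewer to read; the pattern list is illustrative and far from exhaustive:

```shell
#!/bin/sh
# Print, with line numbers, the source lines that deserve reviewer
# attention: dynamic code execution, shell-outs, raw sockets and
# credentials assigned in the code.
audit_flags() {
  grep -nE 'eval\(|exec\(|system\(|/bin/sh|socket\(|passw(or)?d[[:space:]]*=' "$1"
}
```

Anything it prints is not proof of wrongdoing, only a place where the reviewer should slow down and demand the "good explanation" mentioned above.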

The problem is worse than you think

The easiest and worst damage a developer can cause is not related to tool installation. Even if the development environment is managed, it will make little difference if the company does not have strong firewalls, source code repositories, builds based exclusively on those repositories, and code review.

It was a lot of effort to write that (+1 for effort), but I need to argue that you're lucky that I'm fluent in Portuguese. I saw this kind of concordance mistake over and over, and I probably fixed most of them. The answer starts pretty well but then goes downhill with the source code stuff, and then improves a little at the end. Yet, most importantly, you forgot the main issue why people use company-provided computers: what if a developer gets a trojan? 10.1, 10.2, 10.3 subnets will certainly not protect you from a competent attacker in that scenario.
– grochmal Sep 22 '16 at 2:11

Thanks grochmal. That happens in Portuguese too; my first drafts tend to be a little confusing in some parts because I write them too fast. Thanks for the edit.
– Lucas Sep 22 '16 at 8:46

The network separation means having exactly the same configuration in both networks except for the second number of the IP. Predictability allows better firewall rules and better scripts to move things around. It sort of helps against trojans in the test and production networks, if combined with Linux servers and policies against external components.
– Lucas Sep 22 '16 at 8:49

A trojan is not a development-specific risk: if the developer has access to your application's screens, a trojan will have that too, the same as with any other user. To mitigate the risk of trojans I personally do my component prospection inside a VM.
– Lucas Sep 22 '16 at 9:07

As a consultant I have worked for many different companies, and only 2 of them did not grant developers admin access to their machines; one was a small company, the other was a very large company.

At the small company I requested admin access, explained why for about 60 seconds, and was given it right away.

At the large company I requested admin access, and was told that even though they agreed I should have it, company policy forbade it and they could not change it. So one of the IT guys came over, created a local administrator account on my machine and had me set the password for it. From then on, any time I needed to be an admin I could log in to the machine as the local admin, or simply use runas to start Visual Studio or whichever service/application I needed to run as an admin user (IIS, for example). This isn't much worse than choosing the built-in "Run as Administrator"; it's just slightly slower, though fortunately it doesn't come up too often.

This is clearly a "letter of the law" solution designed to bamboozle idiot rules makers. Good news is you can get work done, bad news is that if the idiots acquire a clue, someone gets fired.
– ddyer Sep 1 '16 at 3:33

I had a job briefly where you had no admin access to your computer at all. To get any software installed, you had to put in a ticket, hope it was on their approved list, or justify it in some fashion if it wasn't, and then wait for their security team to remotely log on and install it for you.
– lsd Sep 1 '16 at 11:57

I heard believable rumors of an environment where the standard procedure in devops was to immediately buy more RAM and then P2V the original system image within itself. The inner machine only gets used for company email; everything else is on the outer machine. They ran cables between the cubicles for their own LAN.
– Joshua Sep 2 '16 at 21:43

Block access to github, codeplex and all 150 other code-sharing sites at the network level

Block access to all 9000+ file-sharing sites at the network level

Block access to P2P...etc...

My employer does all of this and more. Quite a few colleagues end up developing on their own personal equipment after-hours at home anyway and throwing it back over the fence through side channels just to get around the crippled infrastructure. Personally I'd rather spend my evenings drinking and dreaming of overdosing on dog tranquilizers than waste an additional undocumented 20 hours doing something that would get me fired anyway.

We consider computer science (and by extension, software engineering) a science. Scientific experiments are conducted in sterile conditions to keep the variables under control, and it is not scientific to develop in a crippled environment where things do not perform as expected because of environmental factors. I can tell you from experience that you don't get well-engineered software out of it: everything developed in-house is broken, which leads to more spending on third-party solutions.

Developers absolutely must have a sterile environment to develop in. I can't tell you how many times various security controls have introduced gremlins into dev and production architecture; a managed, sterile, cloud-based dev environment is really the only way to go. "It worked in the lab, the problem must be something external."

You just need to make sure the VMs you let them provision aren't anemic, and you need to automate the provisioning process, a la OpenStack or AWS, so devs are not waiting for days to have their scaling needs met. Having them all centrally managed gives you a great deal of auditing control and, frankly, gives the devs better performance than they'd get on their own equipment anyway.

Allowing company machines with proprietary information to be internet connected opens the door to hackers. It also makes it much easier for employees to exfiltrate company data for illegitimate purposes.

If the employee has to use a special machine to access internet resources, it creates a controlled path for infiltration and exfiltration of data that is much more secure. It also discourages employees from idly browsing the web or wasting time on internet forums, like I am doing now.

As an admin, I find this problem resolved completely and utterly by a SaaS approach. Virtualize all your different environments, put them on a server rack sized for the number of clients, and configure some remote access (RDP, Terminal Services, or other). Now everything is on the server, and you only have one of each environment to maintain for everyone.