Currently, I am part of a health care project whose main requirement is to capture data with unknown attributes, using forms that the health care providers generate themselves. The second requirement is that data integrity is key and that the application will be used for 40+ years. We are currently migrating the client's data from the past 40 years from various sources (paper, Excel, Access, etc.) to the database. Future requirements are:

Workflow management of forms

Schedule management of forms

Security/Role-based management

Reporting engine

Mobile/Tablet support

Only 6 months in, the current (contracted) architect/senior programmer has taken the "fast" approach and has designed a poor system. The database is not normalized, the code is coupled, the tiers have no dedicated purpose and data is starting to go missing since he has designed some beans to perform "deletes" on the database. The code base is extremely bloated and there are jobs just to synchronize data since the database is not normalized. His approach has been to rely on backup jobs to restore missing data, and he doesn't seem to believe in refactoring.

Having presented my findings to the PM, the architect will be removed when his contract ends. I have been given the task to re-architect this application. My team consists of me and one junior programmer. We have no other resources. We have been granted a 6-month requirement freeze in which we can focus on re-building this system.

I suggested using a CMS system like Drupal, but for policy reasons at the client's organization, the system must be built from scratch.

This is the first time that I will be designing a system with a 40+ year lifespan. I have only worked on projects with 3-5 year lifespans, so this situation is very new, yet exciting.

My questions are:

What design considerations will make the system more "future proof"?

What questions should be asked of the client/PM to make the system more "future proof"?

Data is king

I think it's a bit unreasonable to expect a Web application circa 2013 to be still up and runnable in 2053. Technologies are going to change. Platforms are going to come and go. HTML may be a quaint memory by then. But your data will still be around.

So data is your primary focus. As long as your data is still there, people will be able to adapt to whatever new technologies come about. Make sure your data schemas are well thought out and well suited to expansion. Take your time spec'ing them out.

Regarding the actual applications, your company is probably correct here in having a 'build from scratch' directive. I maintain a couple of 10+ year old Web apps, and I'm very glad they are not locked into the prevailing CMS systems of 2003. They use home-grown, very simple frameworks. I think for something like this you are better off with a very basic framework that you create specifically for the needs of the project.

But the reality is, over 40 years, the company will (hopefully) be making quite a few front end and back end services to adapt to evolving platforms. So given that, I'd target a 5-10 year lifetime for individual user-facing applications.

Look backward to move forward

Rather than trying to figure out how this application is going to still be in operation 20 years from now, I think you should spend your six months fixing the problems you found that the original architect caused, put in place a sensible and robust architecture, and move forward from there.

Partial de-normalization of a database is not necessarily entirely unexpected in a medical setting. Some parts of medical databases have characteristics which make them a good fit for the Entity/Attribute/Value (EAV) model.
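As an illustration only (the table and column names below are invented, not from the project), the EAV model mentioned above stores one row per attribute, so health care providers can add new attributes without any schema change. A minimal sketch using Python's built-in sqlite3:

```python
import sqlite3

# EAV sketch: each row is one (entity, attribute, value) triple,
# so a brand-new clinical attribute needs no ALTER TABLE.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE observation (
        patient_id  INTEGER NOT NULL,
        attribute   TEXT    NOT NULL,
        value       TEXT    NOT NULL,
        recorded_at TEXT    NOT NULL DEFAULT (datetime('now'))
    )
""")
conn.execute("INSERT INTO observation (patient_id, attribute, value) "
             "VALUES (1, 'blood_pressure', '120/80')")
conn.execute("INSERT INTO observation (patient_id, attribute, value) "
             "VALUES (1, 'allergy', 'penicillin')")

# All attributes for a patient, whatever they happen to be.
rows = conn.execute(
    "SELECT attribute, value FROM observation "
    "WHERE patient_id = 1 ORDER BY attribute").fetchall()
print(rows)  # [('allergy', 'penicillin'), ('blood_pressure', '120/80')]
```

The trade-off is that the database can no longer type-check or constrain individual attributes, which is exactly why EAV fits only *some* parts of a medical schema.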

Log everything

We produce software that has been in use by paying customers for over 20 years. The codebase has outlasted several generations of source control tools. Our software hits all your bullet points except for the tablet thing.

Some of the concerns include ESIGN and UETA. Our lawyers believe that we need to keep electronic records readable for a minimum of 10 years. For documents that are retained whole, you should look into PDF/A.

For your database, don't worry too much about normalization. Instead you should be concerned about logging everything and having audit tables that track changes/deletes in data. When upgrading versions, plan on testing new versions in parallel for enough time to ensure that you've got your data migrated. This testing of new versions also includes new operating systems—we've had some very unpleasant surprises over the years. Preserve installation media and license keys in the event that a rollback needs doing. Test backups. If you are going to serialize objects to store in the database, do so as XML instead of the serialization supplied by your development framework.
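The audit-table idea above can be sketched with a trigger (table names are mine, and real systems would capture the acting user and more columns): every delete is copied into an append-only audit table before it happens.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE patient (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE patient_audit (
        patient_id INTEGER,
        name       TEXT,
        action     TEXT,
        changed_at TEXT DEFAULT (datetime('now'))
    );
    -- The trigger copies every deleted row into the audit table,
    -- so a "delete" in the app never silently destroys data.
    CREATE TRIGGER patient_delete_audit
    BEFORE DELETE ON patient
    BEGIN
        INSERT INTO patient_audit (patient_id, name, action)
        VALUES (OLD.id, OLD.name, 'DELETE');
    END;
""")
conn.execute("INSERT INTO patient (id, name) VALUES (1, 'Doe, Jane')")
conn.execute("DELETE FROM patient WHERE id = 1")

audit_rows = conn.execute(
    "SELECT patient_id, name, action FROM patient_audit").fetchall()
print(audit_rows)  # [(1, 'Doe, Jane', 'DELETE')]
```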

For your staffing, long term code bases need long term memory. Ideally you'd want people around who have been around a long time. If that is institutionally impossible, then you need to document everything in something like a wiki. And my advice is a wiki that can tie in with your bug tracking system.

For your codebase, make sure you have comments in your code. Migrating from one version control system to another will almost always lose your check-in comments. I'm a fan of naming unit tests after spec and bug numbers. That way, if the unit test Test_Bug_1235 breaks, you know where to look and what it is supposed to be testing. It isn't as "sexy" as naming your tests Check_File_Save_Networked_Drives, but that sort of test is hard to trace back to specifications, requirements or bugs, unlike Test_requirement_54321_case_2.
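As a hypothetical illustration of that naming convention (the bug number and behavior are invented), a failing test name points straight back at the tracker entry:

```python
import unittest

# Test named after a tracked bug number: if it goes red, the
# maintainer knows exactly which ticket describes the intent.
class RecordRetentionTests(unittest.TestCase):
    def test_bug_1235_deleted_records_are_archived_not_destroyed(self):
        archive = []
        def delete(record):
            archive.append(record)  # the "delete" only moves the row aside
        delete({"id": 1})
        self.assertEqual(archive, [{"id": 1}])

suite = unittest.TestLoader().loadTestsFromTestCase(RecordRetentionTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```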

I agree with GrandmasterB that "Data is king". Whatever trouble may befall the database and its users, if the data is around and in good shape, they will be able to cope. Having said that, you don't design for future extensibility by adding lots of fields or tables "just in case". Instead, you ensure that the data you expect to store will be stored accurately (normalization is one key technique, as you noticed) and that there are enough one-to-many mappings that tables relating to any objects stored in the database can easily be added in the future, if required. In particular, do not assume that anything that is one-to-one at this moment will continue to be in the future (people change addresses, names and even gender, etc.)
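To make the one-to-many point concrete (table names are illustrative, not from the project): put addresses in their own table with a validity date, and a person moving house becomes an INSERT instead of a destructive UPDATE.

```python
import sqlite3

# One-to-many sketch: address history lives in its own table,
# so changing address never overwrites the old value.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE person (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE person_address (
        person_id  INTEGER REFERENCES person(id),
        address    TEXT,
        valid_from TEXT
    );
""")
conn.execute("INSERT INTO person VALUES (1, 'Jane Doe')")
conn.execute("INSERT INTO person_address VALUES (1, '12 Old St', '1990-01-01')")
conn.execute("INSERT INTO person_address VALUES (1, '34 New Ave', '2005-06-15')")

# Current address = the latest valid_from; history stays queryable.
current = conn.execute("""
    SELECT address FROM person_address
    WHERE person_id = 1 ORDER BY valid_from DESC LIMIT 1
""").fetchone()[0]
print(current)  # 34 New Ave
```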

As for the actual front end, make it as simple and targeted for the task as you can. Whoever comes after you will need to read and understand the application code first, and that is normally a more difficult task than adding whatever minor functionality will be required. Try to do your best to make the structure of the application easy to understand; this will make future maintenance simple.

Design and build your own operating systems and hardware? I'm trying to think how IBM has been able to pull off this feat with nothing but what seem like eggshells now. I guess if you can slow the pace of evolution you might just be able to pull it off. But in all seriousness, it might just be difficult with web architecture, where requirements are constantly changing.

Feel free to ignore my comment, since I have never done programming. Like everyone else has said, short of running your code in a virtual machine and magic, you can't plan for 40 years of use. I'm thinking the guy said this thinking, OOOOH! If I tell him I need the code to work 40 years from now, he'll do an extra good job. The most you can do is be as explicit as possible in your commenting. If it is running 40 years from now, even if no one can understand your reasoning for X, at least they understand what the purpose of X was meant to be. At the very least, they can take your comments as a guide on what they need to do in order to port your code.

Really, you can't expect a web application to last 40+ years. There was hardly the concept of "web application" merely 15 years ago. Many things may change, including browsers - which is what you use for web applications.

The current web is flawed in many ways, starting from the DOM, and many theorists want to change it. And will probably succeed in the next 40 years.

So yeah, data is king. As an additional advice, I'd use open source databases, so there could be a chance to retrieve the data when the project is discontinued.

Everything else will have to be updated to the new standards when time is due, there's no way it can be avoided.

40 years? There's no way you'll get away with writing something now and simply doing maintenance for 4 decades. 40 years ago there wasn't even a web, and since then the speed of change hasn't really slowed down. You will have to rewrite anything users access several times over that time frame. So make sure that you plan for this to happen and try to make it as easy as reasonably possible.

As has been said, probably the most important thing is caring about the data. That's maybe the most stable thing, and also what this project is seemingly centered around more than most other projects. And concerning that, I really do see a problem with some kinds of denormalization. It might make some things easier and be good for performance, but in my experience it tends to threaten consistency. In any case, the data model should be very well thought out.

Another thing is that for such a long time frame you also need to think about lots of other things you can normally assume to be stable for at least another couple of years. Using anything outside properly specified, universally implemented SQL has to be a no-no. Any business logic should not just be in the code but also well documented, because your language of choice might not make it to 2054; make sure the documentation at least does. Plain text or maybe PDF/A might work; docx will most definitely not. Almost nothing that seems to have existed since the beginning of time was actually around 40 years ago when it comes to computers. Afaik the first published ASCII standard was in the early to mid 60s. That is just 50 years. C should be just over the 40 year mark.

All in all and from the sound of it, this seems like whoever initiated this project might not be aware of how Herculean a task it is to create and keep running a piece of software for roughly twice the time it took from the first Pentium up until today.

My company is running various custom vertically-integrated apps that have kept going for decades. The data is still on some mainframe somewhere in COBOL (outsourced and virtualized), and the apps that interact with it are updated occasionally, including some web apps.

If you use something standard enough for the data engine, you should be able to keep it running forever, despite replacing the user-facing portion of the system from time to time.

"Allaun"]Feel free to ignore my comment, since I have never done programming. Like everyone else has said, short of running your code in a virtual machine and magic, you can't plan for 40 years of use. I'm thinking the guy said this thinking, OOOOH! If I tell him I need the code to work 40 years from now, he'll do a extra good job. The most you can do is be as explicit as possible in your commenting. If it is running 40 years from now, even if no one can understand your reasoning for X, at least they understand what the purpose of X was meant to be. At the very least, they can take your comments as a guide on what they need to do in order to port your code.

Using a virtual machine won't really change anything. Chances are that the package won't be supported anymore in 2054, and no machine could run it. You really can't focus on the front end; nobody has a crystal ball to predict what could be running 40 years from now.

I'll echo the "data is king and comments are queen". The first requirement: "to capture data with unknown attributes using user-generated forms by health care providers" is insane.

Part of data integrity is knowing what attributes you have and what entities. It sounds very much like they want to let the health care providers come up with new attributes on the fly. That's a disaster waiting to happen, because it means there will be no integrity because there will be no consistency.

As for the 40 year lifetime of the application, are you kidding me? That would be the equivalent of still using an application written in 1974...

Which, as it happens, is about the same year the mathematical underpinnings of relational database theory were first implemented!

And to re-architect this magical miracle application you have only 6 months with you and one junior programmer?

Let me just say...may God have mercy upon your soul.

However, if you've truly pissed off the gods of fate to such an epic degree, I'll give you some advice.

1. Document the reasons behind EVERYTHING. The wiki suggested by another poster is a great idea.

2. Keep the code heavily commented, and the comments up to date, and make sure you concentrate on the WHY not the HOW.

3. KISS, and LCD. Especially in your database design. Choose an engine that gives you declarative referential integrity rather than using triggers. Other than that, be fanatical about sticking to the basics. Concentrate on 3NF, fight the desire to denormalize except perhaps for history tables (and even then, only when reporting needs demand split-second speed and/or the user-load on those reports is in the thousands per hour).

4. Do everything you possibly can to be absolutely clear. Naming conventions should be unambiguous, and NOT require a damn glossary. Brevity is not your friend, although obscene levels of verbosity will kill you too. 40 years from now some poor schlub of a developer is going to have to understand this thing.

5. Avoid the single lookup table for multiple attributes, no matter how well it may seem to fit the "unknown new attribute" requirement. That way lies the pain of bottlenecks, especially in updating.

6. Keep EVERYTHING, do not allow deletions, and be prepared to demand state of the art storage, and loads of it. Tell them they're going to pay for backup, or they will lose their jobs WHEN the data vanishes. Don't trust the cloud either, because it's not reliable enough. Maybe in 20 years, but that's a sucker's bet.
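The declarative referential integrity advice in point 3 can be sketched like this (illustrative table names; SQLite stands in for whatever engine you pick, and needs foreign keys switched on per connection): the engine itself, not application code or triggers, refuses orphan rows.

```python
import sqlite3

# Declarative referential integrity: a foreign key constraint the
# engine enforces, so no application bug can create orphan rows.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite-specific switch
conn.executescript("""
    CREATE TABLE ward (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE bed (
        id      INTEGER PRIMARY KEY,
        ward_id INTEGER NOT NULL REFERENCES ward(id)
    );
""")
conn.execute("INSERT INTO ward VALUES (1, 'Cardiology')")
conn.execute("INSERT INTO bed VALUES (10, 1)")       # valid: ward 1 exists

rejected = False
try:
    conn.execute("INSERT INTO bed VALUES (11, 99)")  # no such ward
except sqlite3.IntegrityError:
    rejected = True
print(rejected)  # True
```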

I'm a bit surprised how many people here on the forums are unaware of projects (not apps) living for decades. Stuff that runs on (what once was called) mainframes is often designed for decades. Especially in heavily (government) regulated areas like healthcare.

That doesn't mean all of the project will live that long, some will be ripped out and replaced in a few years, others will be migrated to newer systems. But, don't neglect anything on purpose, you'll be surprised what survives several migrations.

So I emphasize the same as most others, structure the project accordingly, take real good care of the data structure and document everything. If it's really a healthcare project, I bet it will be audited at some point and probably has to conform to some government or ISO standard. So prepare for that as well.

Society is a project that has been going on for a long time, but how it's done has changed a lot in the past 40+ years.

The software, hardware, designs, problem domain, and all the workers will be completely different 40 years from now, probably several times over. The person is asking how to handle the implementation such that his specific implementation will last 40+ years. That won't happen; they will change it several times.

There's an interesting bathtub curve here... floppies and Fortran are still around. But the general feeling (which I'd agree with) is that the Web quite possibly won't be.

I'd have a punt at C (and probably x86 architecture) lasting 40 years, but not much more than that. Agree with ChrisSSD and Grandmaster B: it won't happen; the best you can do is make it easy to migrate, which means making everything (the data structure in particular) well documented and not dependent on the intricacies of a particular platform, which will be lost.

I guess it's because I'm a graphic designer and have no idea about such kind of projects, but for some reason "I will (re-)architect", "40 year lifespan" and "thinking about Drupal" made me pull a "Nope. Nope. HELL NO." face.

Healthcare, 40 years lifespan, 6 months with 2 developers (one a junior, the other a 'senior' who considers Drupal for it) sounds a lot like a "You're going to have a bad time." meme.

I would concentrate on the 3NF data model and use standard SQL for the database queries as much as possible. Try to keep the database loosely coupled to the app because the front end will be rewritten several times in 40 years. There is no practical way for you to guess what will be needed in 40 years.

One problem not mentioned is a language feature (SQL, whatever) you implement now may be deprecated 15 or 20 years from now. The only answer for this is to document your model and application.

Also, start looking for another job, whoever came up with the 40 year requirement has no clue.

I worked on a health care application that has been going since the early 80's, and will be nearly 40 years old soon. The key? Your application must be the Ship of Theseus.

The data has migrated from a proprietary VMS database, to Sybase on SunOS, to Sybase on Solaris, to Sybase on Linux, and finally to MSSQL server.

The application had a similar migration path, from console, to x-windows, to Apache, then finally to IIS.

The data was completely separate from the application. Both were updated frequently and never left to stagnate. The schema was updated as needed, but only after very careful consideration, and always in a backwards compatible manner.

It may look completely different than it did back then, but it's still the same ship.

At my company we have systems that are over 10 years old... so I guess my advice should be somewhat useful but probably not complete:

* Use an open source core, such as PHP

* Use something popular that will be maintained into the future, again PHP

* Avoid third party dependencies - anything you did not write yourself will be difficult to maintain. Start from scratch as much as possible; sounds like you've been told to do this.

* Keep your code as simple as possible. There is going to be massive scope creep and code rot, but these can be minimized with simple code, and refactoring simple code is easier.

As for data integrity, I suggest a system where you only insert records into the database; never update or delete records. It is OK to move records to another database, however, to archive them and keep your main database small/fast. Basically, instead of updating a row, insert a new one and move the old one to another server (perhaps a few hours later?).
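An insert-only store can be sketched like this (table and function names are invented for the example): an "update" becomes a new version row, nothing is overwritten, and the full history remains queryable.

```python
import sqlite3

# Insert-only sketch: every write is a new version; the "current"
# value is just the row with the highest version number.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE measurement (
        patient_id INTEGER,
        kind       TEXT,
        value      TEXT,
        version    INTEGER
    )
""")

def record(conn, patient_id, kind, value):
    # Next version = current max + 1 for this (patient, kind) pair.
    (latest,) = conn.execute(
        "SELECT COALESCE(MAX(version), 0) FROM measurement "
        "WHERE patient_id = ? AND kind = ?", (patient_id, kind)).fetchone()
    conn.execute("INSERT INTO measurement VALUES (?, ?, ?, ?)",
                 (patient_id, kind, value, latest + 1))

record(conn, 1, "weight", "80kg")
record(conn, 1, "weight", "82kg")   # a correction, stored as version 2

(current,) = conn.execute(
    "SELECT value FROM measurement WHERE patient_id = 1 AND kind = 'weight' "
    "ORDER BY version DESC LIMIT 1").fetchone()
print(current)  # 82kg
```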

I guess it's because I'm a graphic designer and have no idea about such kind of projects, but for some reason "I will (re-)architect", "40 year lifespan" and "thinking about Drupal" made me pull a "Nope. Nope. HELL NO." face.

Healthcare, 40 years lifespan, 6 months with 2 developers (one a junior, the other a 'senior' who considers Drupal for it) sounds a lot like a "You're going to have a bad time." meme.

You're not wrong.

Although to be fair, if Drupal (including its database) were merely the front end, and the actual data and logic were encapsulated in a separate service - a solid db exposed by a lean, mean, awesome-API-wielding-machine - you wouldn't necessarily be doing anything wrong (ignoring anyone's feelings about Drupal). There's not necessarily anything wrong with your front end having its own database, as long as its role is well-defined, and it doesn't somehow become the final authority on any critical data.

In fact - it might actually be sane to leave the front end to some quick and messy framework(s), because it should be throw-away. And it will be and should be thrown away, several times. And you should be devoting the largest part of your time and budget to the part that can't be thrown away - the database, service and API (and tests, and backup procedures, etc.) behind it all, which shouldn't have any coupling at all to the presentation layer, whether it's Drupal, Joomla, Wordpress, Skynet, etc.

I'd even go so far as to suggest contracting out the front end build, as this will force you to have to really nut out a successful separation of layers. And if you're not sure what work would be left behind, after you give away the front end... then you should be the front end, and contract out the back end.

But yeah, the way it was phrased (and the fact that he chose Drupal), I kinda get the impression he was wanting to store all that critical data directly _in_ Drupal, so...

I agree with the data-is-king mantra. I'd also recommend trying not to be too specific with any implementation details; in particular with your data it's best to keep it as portable as possible, and to have your database code as decoupled as possible, i.e. use vanilla SQL rather than SQL statements specific to the database software. Of course you can use them if you need to improve performance, but in such cases you should try to make it easy to switch back to vanilla SQL equivalents, so that if the database software changes you don't suddenly have a broken system.

An audit table for database changes is actually a pretty good idea, especially if you can give it an adjustable granularity such that you can audit more during development and less during production. I much prefer a table of important events to trawling through log files, plus it lets you be more concise with your log files as well.

The actual code is tricky; I don't think anyone can safely say that a particular language will still be around in 40+ years. Though I expect most should still be supported in some form, there are bound to be changes that will make you (or a future developer) want to switch. With that in mind, it's crucial that all code (in whatever language) is as clear, concise and well documented as possible, so that it's easier to port.

But yeah, get the data structures right, and write good code. I'm not sure there are really any special considerations beyond that.

I'd approach this like any other project that is going to have a long lifespan. And they all have long lifespans; I've never been asked for a project that was intended to be thrown away a few years down the line. You just have to remember that applications spend most of their development time in maintenance mode, and you can constantly upgrade parts and add features to keep it relevant. By year 40, every line of code that was originally there might have been replaced, but it's still the same application.

There are a million important best practices for leading successful software projects. The one that almost everyone forgets is this: documenting requirements is really important, even more so than code. Code can be (and will be) replaced, but you have to know what it is supposed to do; otherwise you cannot do maintenance on it effectively. I've heard of banking applications that still run on COBOL. The bank certainly has the money to replace it but is too afraid to, because they don't even know what it does.

Write comments early and often. No matter how hard you try, your successors are going to see the code you wrote as a complete trainwreck. It'll get butchered by contractors and have a bunch of buggy, contradictory modules tacked on like pus-dripping boils. And the whole mess will get rewritten a time or two by grossly underpaid grunts who don't know anything about anything but know this is their one opportunity to show they can do software development before moving on to greener pastures and taking whatever knowledge they learned with them (yes, I might be engaging in a little transference here...).

What's important above all else is that people who weren't there when it was written and may not even be fluent in the language are capable of understanding what it is that everything does. You also need to account for the fact that it may not always be a Web application--a lot of my legacy-software experience has been bringing executables that remember President Reagan kicking and screaming onto the internet, and once the 'cloud' fad wears off the code will flow the other way, or perhaps into some new medium that hasn't even been invented yet.

In your 6 months I'd get the database right and put enough of a front end on it to get the data in. If the system must allow deletes for data protection reasons, then ensure the data gets shunted into a permanent log as XML or something like that. Encrypt it if you must, so that the keys can be destroyed after a certain period of time. This is so that you can retrieve the data if something goes wrong and something is deleted accidentally. Separate the personally identifiable information from the rest of your data so that compliance with data protection laws is easier. Do the same for any comment fields (think about a logged and audited table for comments, where each comment has its own entry and can be retrieved for freedom of information requests more easily, without having to retrieve every comment. This also makes disciplinary matters easier in the future.)
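The "shunt deletes into a permanent XML log" idea can be sketched like this (table and function names are mine): the row is serialized to XML and written to an append-only log table before the actual delete runs.

```python
import sqlite3
import xml.etree.ElementTree as ET

# Sketch of a "delete" that first preserves the row as XML in a
# permanent log, so nothing is ever truly gone.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE note (id INTEGER PRIMARY KEY, patient_id INTEGER, body TEXT);
    CREATE TABLE deletion_log (
        payload_xml TEXT,
        logged_at   TEXT DEFAULT (datetime('now'))
    );
""")
conn.execute("INSERT INTO note VALUES (7, 1, 'Follow up in six weeks')")

def delete_note(conn, note_id):
    row = conn.execute(
        "SELECT id, patient_id, body FROM note WHERE id = ?",
        (note_id,)).fetchone()
    elem = ET.Element("note")
    for col, val in zip(("id", "patient_id", "body"), row):
        ET.SubElement(elem, col).text = str(val)
    conn.execute("INSERT INTO deletion_log (payload_xml) VALUES (?)",
                 (ET.tostring(elem, encoding="unicode"),))
    conn.execute("DELETE FROM note WHERE id = ?", (note_id,))

delete_note(conn, 7)
payload = conn.execute("SELECT payload_xml FROM deletion_log").fetchone()[0]
print(payload)
# <note><id>7</id><patient_id>1</patient_id><body>Follow up in six weeks</body></note>
```

XML (rather than framework-native serialization) keeps the log readable decades later, even after the application stack has been replaced.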

Try to have a core table and a custom fields table for every object in the database.

Auditing of the data should be second only to the data itself, as good audit logs can save your bacon later when people accuse your code of randomly deleting or changing data. It also helps with data recovery if anything gets lost or deleted.

Once you have sign-off on the database and the rules around the database, build the middle layer that will serve the front end. This should be completely divorced from any particular front end; again, you should have a bare-bones front end that works but doesn't do anything fancy. Finally, think about your front end, as this really is the least important part of the system from a 40-year-lifespan point of view. If you want to cache data in Drupal then go ahead, though I'd recommend against it, as while quicker, it will complicate your life later.

I work at a software product company dealing with records management for Microsoft products - So when it comes to long term, and how to get data into/out of our system, I know a fair bit.

-GrandmasterB - Data is King

100% correct. If you document your code properly, your data structures properly (as you should with software), you can take your DB and plug ANY new system into it. I'd say a KVP system would be best, as it allows more flexibility, but that's just my take.

-Robert Harvey - Look Backward

So true. You need to understand how other projects in this space have failed, and lessons learnt. Obviously, these days, issues that cropped up 10 years after implementation may appear after 5 years, so take that into account.

-Tangurena - Log Everything

Being in recordkeeping software myself, this is a requirement, so I laud the title. But it is also true. Being medically based, you would likely have a legal requirement to *not* lose any data. In fact, being medical, it's not just mission critical; it could kill. Refer to the first commenter: data is king. Access to data is the next step.

Ignore the comments about lawyers *thinking* they need data readable for 10 years electronically. That's just a developer being a developer. As long as data is alive, especially in medicine, it needs to be readable ASAP. Once it is a record (and if this is America, there are differences for me, but by and large the same), there is a very long period in which the data must still be available on request for legal purposes, though it does not need to be available in minutes.

Best bet:

Data: KVP. You get to be *very* flexible with that.

Accessing data: Web services. Means that the front end can change easily. Different UIs (PC, tablet, etc.) are good to go.

Document: EVERYTHING. That's the main thing for long-running software.

Security: Use OAuth or claims-based. Perfect for web based systems. Also, claims-aware web apps have users that are KVP defined. It's kinda cool if you like that kinda thing (I know I do!). Can extend the system as necessary.

Final thoughts: a 40-year lifecycle? Really? Are they using paper right now? If so (which I doubt), ask if they have used the same filing system for 40 years. I doubt it.

Documentation, documentation and more documentation... you make it easy for the next person to fix the website...

Any competent programmer should be able to rewrite that application if you tell him exactly what needs to be there... yes, it takes three times as long to write the documentation, and the effect is that the original programmer is not needed (so no ingenious built-in job security).

There is a limited number of programmers; the less documentation a program has, the higher up the scale of programmers' needed skill it goes, to the point where the programmer is no longer a programmer but a debugger. Thus it will take the time equivalent of "real soon now" to duplicate the program... so it behoves management to either write the documentation themselves or allow enough time so that the project can be documented properly... but because deadlines are so ridiculous, it ultimately comes down to the manager to get it done.

TL;DR... it's a management problem; the maintenance team is more important than the programming staff... if a so-called web app needs to last a generation (40 years), it needs to be able to be maintained by a person 40 years younger... in short, NEVER trust a programmer younger than 30...

As many have said, an app will not last even 5 years, but the data should last that long.

1. START WITH YOUR DATA: Databases have been in use for a long time and are not going anywhere. Normalize the data and move it over to a new database. There are "standard" databases in most fields; I'm sure there is a standard for patient records. I cannot tell you whether you should go with Microsoft, MySQL, or whatever; that's up to you. It is easy enough to move data back and forth.

2. DEFINE THE INTERFACES (COMMUNICATIONS PROTOCOLS): Internet Protocol has been around for decades, so it will be in use for decades more. IPv6 is not going anywhere. (Sometimes I wonder if IPv4 will still be here in 40 years, but that's another issue.) Come up with a standard way to submit new data and to read existing data. I'm thinking HTTP (or HTTPS), because it's so ubiquitous today that someone will know how to support it in a few decades.

3. BUILD THE BACK-END AND THE FRONT-END: The 8086 architecture has been around since the IBM PC came out in 1981. That was 33 years ago, so it should still be around a few decades into the future. Database servers are common and "easy" to set up. Web browsers will still be able to do HTML in a few years. Even phones will be able to do IP and HTML for decades to come.
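The "standard way to read existing data" from point 2 can be sketched with nothing but the standard library (the endpoint and record shape are invented for the example): one plain HTTP/JSON interface, no framework-specific machinery for a future maintainer to reverse-engineer.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical minimal record service: GET /<id> returns the record as JSON.
RECORDS = {"1": {"name": "Jane Doe"}}

class RecordHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        rec = RECORDS.get(self.path.strip("/"))
        self.send_response(200 if rec else 404)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps(rec).encode())

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), RecordHandler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/1"
with urllib.request.urlopen(url) as resp:
    data = json.loads(resp.read())
print(data)  # {'name': 'Jane Doe'}
server.shutdown()
```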

I ran into some such projects and planned to start one of my own. When I was inquiring about the issues, I found out that the bigger problem is mentality, rather than the tech to do it. Most people think of web apps in terms of popular web sites, which have to change rapidly to satisfy consumer demands, but this does not always hold true. So, here is what I found out for an app 15 years old:

- The choice of language is critical. PHP, Ruby and similar are a HORRIBLE choice. You don't want a 100k-1M line program written in a language which evolves so rapidly and which breaks backward compatibility, because you will have an extremely difficult time finding programmers for it and you will spend more time getting the app up to date than on anything else.

- Use ISO/IEC standardized stuff. Language-wise, that means C11 is now your best choice. For the DB, I'd go with Postgres, not MySQL. Separate reads/writes. Have everything modularized and separate (interfaces for the db, the db itself, the UI, everything). It will make it much easier to upgrade if that becomes necessary than being stuck with an intertwined mammoth app. Whatever you may say now, C has been around for an extremely long time and C89 apps still run just fine.

- Forget popular CMSs like Drupal, Radiant and similar. They are built for totally different purposes.

- Stuff about documentation and comments has been mentioned, so I won't repeat myself.

Finally, some apps really don't change that often and stay around in pretty much the same form for a LONG amount of time.

Agree that the data model is critical. But as well as an excellent and well documented ERD you need, up front, to have proper DFDs, flowcharting, lifecycle analysis, and to have thought through issues like Unicode support and date and time support. Don't even assume SQL will be around in 40 years' time. (I think it probably will be, but a lot of people are spending money on inventing replacements.) Ideally, refactoring the application when standards change is then going to be easy, because the programmers will be able to consult a proper specification and won't need to understand a now-archaic language too well. Oh yes: as well as Unicode support, assume that your application will be rewritten in a language which has numbers rather than ints, longs and so on. Portability. If you are always expecting a possible real number, your loose coupling will work reliably.

Finally...leave this memo for your successor. 6 months for a significant enterprise healthcare application specification, with 2 people working on it? Fantasy world.

Many other commenters have done a great job of summarizing the challenges involved, but I think one of the pains in the neck will be supporting mobile/tablets; those will definitely change over 40 years, likely in very unpredictable ways.