It's 2018 and Koala has been in use, bringing web applications to millions and millions of people, for over 4 years. Better still, somebody stumbled upon the secret of anti-gravity and produced a very simple engine capable of levitating huge masses and producing finely directed motions with very little energy consumption. The concepts of the LLOMA (Light Levitational Omni-Directional Mass Accelerator, pronounced loh-mah) engine were immediately put to use by NASA to replace current solid fuel rocket technologies, and no less than 83 manned missions to the moon have been accomplished, in support of 3 research colonies that have been built on the surface with subsurface laboratories. 3 manned missions have put astronauts on large asteroids half the distance to Mars, and the number of unmanned robotic missions to the outer planets has risen sharply. The first levitational vehicles using the LLOMA engine appeared three years ago and, though costly, have been a real hit with the public, and sales have boomed.

Now while it may not be true that the average human being can't get out of bed and start their day without choking on their toothbrush, it is true that operating a levitational vehicle requires skills that go beyond what surface transportation requires. Put 1500 LLOMA vehicles over a street for just 2 miles and about 80% of them will be destined to become lawn darts. What's a lawn dart? That's an aviation term for an aircraft that stalls, goes into a nose-down spin, doesn't recover and becomes a statistic. Thankfully, the entrepreneurs behind bringing LLOMA into production knew this and invested in a GPS application supported through Koala to auto-navigate LLOMA vehicles. Actually, the LLOMA engine quickly proved its worth by demonstrating that it could keep LLOMA vehicles levitated, even after a collision. That demonstration of inherent safety really paved the way for implementation. And although it's still too early to support navigation for LLOMA vehicles to every corner of the globe, significant inroads have been made and some large cities already have heavy LLOMA traffic flowing in streams above highways and city streets. Legislation governing implementation of skyways has been surprisingly quick.

The LLOMA application uses GPS, taking departure and destination information input by the user to define skyroutes and navigation. A network of servers controls vehicles in real time and, in combination with sensors on the vehicles, can form bumper-to-bumper traffic streams that train only inches apart. Landings, at first, were a bit tricky, as the system was incapable of determining clearance, and this proved to be the only point in the system where manual authorization from the user was required.

OK, that's SciFi. But how about air traffic control for commercial airlines and aircraft flying IFR (Instrument Flight Rules)? In today's world, the system is not autonomous. While aircraft have autopilot and computers can take over for landings, pilots still govern flight and respond to air traffic controllers for clearances. Can a web-based application automatically manage air traffic? The system would definitely require features that allowed the server to make requests of the client, which would send responses. Koala's role reversal feature comes to the rescue.
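Just to make the idea concrete, here's a minimal sketch of what role reversal looks like in code. This is purely illustrative, not any real Koala API: the "server" pushes a request down an already-open connection and the "client" answers it the way an HTTP server normally would. A socketpair stands in for the persistent connection.

```python
# Role reversal sketch: the server initiates the request over an
# already-open connection and the client answers it, HTTP-style.
# All names here are illustrative, not part of any real Koala API.
import socket
import threading

def client_loop(conn):
    """The 'client' end: waits for a request pushed by the server."""
    request = conn.recv(1024).decode()
    # Respond the way an HTTP server would, even though we're the client.
    if request.startswith("GET /status"):
        conn.sendall(b"200 OK\r\n\r\nready")
    else:
        conn.sendall(b"404 Not Found\r\n\r\n")

# socketpair stands in for a persistent client/server connection.
server_end, client_end = socket.socketpair()
t = threading.Thread(target=client_loop, args=(client_end,))
t.start()

# The *server* sends the request -- the reversed role.
server_end.sendall(b"GET /status HTTP/1.0\r\n\r\n")
reply = server_end.recv(1024).decode()
t.join()
```

Nothing exotic is happening at the transport level; the novelty is purely in which end is allowed to speak first.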

OK, enough fun. The example was meant to demonstrate why the client/server role reversal features of KOALA are desirable and necessary for full web application capability.

I may live long enough to see KOALA in action, but I don't think I'll live long enough to ride in a LLOMA vehicle. Shucks.

But hey, I might get the chance to play an interactive client to client game featuring the concept of LLOMA vehicles in my lifetime. That'd be enjoyable.

Server-independent client-to-client interfacing is a neat trick that Koala will handle very nicely. The server's role in this type of application primarily governs initiation of the client/client connectivity; it then bows out, remaining available for requested services only if necessary. The client-resident portions of the application take over complete control, and requests and responses become a dialogue client to client. The server doesn't even participate as a queuing mechanism. It's totally out of the picture, unless requested to provide a service.

Impossible? Not at all.

Is it possible to go client to client without the server being involved, even for initialization of the connection?

Yes, possible, but those sorts of connections would be very privileged and limited to folks with direct access to the backbone.

Let's throw in this curve ball and see if someone can hit a home run ...

Is it conceivable for a client to write and host an application client side?
Now that's a marvel. Where's the hosting server in this scenario? Well, let's say that it is possible with client-to-client connectivity, but ... these would be very small applications, novelty scale ... something like what people did with yesterday's technology off server hosts.

I'm sure that if some thought is put toward it, several nifty examples of such applications can be imagined, scoped and specified.

Once upon a time, programmers entered code in octal using three fingers and push-button LEDs. They had to dump their code to punched tape and use push-button LEDs to load their code after that point. Soon, memory, terminal and disk technologies allowed coding in languages which were subsequently compiled. These languages, top-down procedural languages, were the earliest flirtations with algorithmic design and data structures. And when object-oriented programming came on the scene, it quickly became the accepted paradigm.

And then came HTML and the World Wide Web. HTML, a markup language, was at first static, with no capacity for executing code, let alone debugging it. The limitations of HTML as a display language for exchanging information on the web were soon overcome with web languages like JavaScript, VBScript and later, PHP, Perl and eventually, the Java languages. These have bloomed into a number of frameworks and interface vehicles that have become so prodigious and daunting in variety and extent as to make even a seasoned web guru scratch their head and ask "What the heck is that and what's it do?"

To be sure, web application development faces some enormous challenges. The tools have become too complex, and the development of full-fledged applications will suffer in a number of ways, not the least of which are design and SQA. The KISS (Keep It Simple, Stupid!) principle has been thrown out the window!

And now on this day, the web community is poised to bring applications to the fore, already making inroads, but in a storm of diverse directions. And I myself love diversity. I have an openly Darwinian philosophy that diversity enables progress, because where one thing fails, something else will emerge from the diversity and be successful. Yet I know a jungle for what it is: a wild, unruly, unordered place, prone to disease and malady. Is it any wonder that the malware hacker has such freedoms on the modern web? It's time to pave the paths and make a road.

Order comes from chaos, so the theory goes. And the butterfly effect implies that small events trigger large and sweeping changes. And so, my thread on Koala, you might consider to be a butterfly. And have no doubt that I intend it to bring order out of chaos.

Alright, let's take a look at some true internet applications as they exist today.

Adobe's Acrobat Reader comes to mind as what would appear to be a true web application that functions similarly to one of Koala's scenarios. RealPlayer and QuickTime for videos are two other web applications. These applications require downloads of code which is OS-version dependent and run on the operating system. Video players use the <embed></embed> tags to allow web programmers to make direct use of them. And there are other applications that seem to be classifiable as web applications. Microsoft's Windows Update routines qualify for this discussion. But when you consider Symantec's LiveUpdate, what we are really looking at is not a web application, but an internet application, which opens up its own port and does not use the browser in any way. Acrobat Reader, video players and even Windows Update at least appear to have ties to the browser. Just for fun, consider MS and Yahoo Messengers. Do you think they are web apps or internet apps?

So what's Koala offer that's different, if this sort of thing can be done already?

Well, if you're a web application developer, try writing an application that is portable, can be downloaded and run, and is then disposed of after the user terminates the session. To accomplish this sort of application development, you'd have to depend upon built-in browser functionality, library modules added to the server, or some very clever and deep understanding of how to open up an internet port through sockets and do your thing.

The Koala VM OS layer is going to do a couple of things. Browser design will be lightened. A lot of the current built-ins will come out of the browser and get incorporated into the VM OS layer which, as we stated in a previous post, would be better implemented separate from the browser, but accessible from the browser. It could in fact function offline to do a whole range of services not web related. The VM OS layer would be a translation layer for accessing the OS, a filter and a container for web connections, particularly for those client role reversals, enabling the client browser to take requests from the web and respond HTTP fashion. Applications of the ilk that have been mentioned in this post need not even be downloaded, but remain virtual, loaded into memory only. The VM OS provides APIs to do all those wonderful things that current technology diversity allows us, if we have the time to read and learn it all before we get old and grey.

The big thing Koala does through that VM OS layer is to standardize much of that technology, condensing it down to simple interfaces that web app developers can quickly relate to and put to use. It will also allow plug-in Mods just like the Apache and other servers do, so it is extensible in an orderly way which further supports portability of web application code. We begin to trim back the jungle and create a formal garden.

The application developer now relates to an API and not a plenum of diverse technologies.

Oh and is it clear how the separateness of the VM OS layer makes it ideal as a development, debugging and SQA tool? That functionality would be built in.

But hey, hold on a minute ... isn't most of that huge techno-quagmire on the server side?

Yes.

And so, that being the case, how's a Koala browser with a built-in VM OS going to help condense that jigsaw puzzle into a sane and structured order out of chaos? How's that jungle going to be made into a formal garden? Koala doesn't reach that far, does it?

Well hold on a minute and let's take a look. Why is all that quagmire over there on the server in the first place? There's a history to this and it deserves a look. The client/server architecture on the web was based on clients making requests and getting responses which painted the page. When dynamic pages came on the scene, this required server technology to be ever more the workhorse, although browsers began to incorporate logic to support dynamic pages through a diverse number of technologies. And even with Koala, the server will still do a lot of the grunt work, especially where databases and connection of the client to the web are concerned. What is not clear is how the current architecture forces us to use the server to do some things that an alternate architecture would allow on the client side, specifically, application functionality that could be incorporated into the controller and view parts of client-side MVC frameworks. To implement dynamic web pages, the client side of the coin was able to do only so much. Most changes, including those done by the now beloved AJAX techniques, require requests to the server. We do a lot of things today to change the face, the look and feel, of web pages that really don't need data services from the server ... only repainting of the page. We even go to the server to have it perform algorithmic logic that can be performed client side. It will be possible to do these things for applications on the client side and in a standardized way.

And let's not forget, now we are talking about a client that can do role reversals, servers that can do role reversals. Yes, Koala can help to cultivate the server side jungle. It's just not clear what the server side would give up to the client side in the Koala architecture. But with a little analysis, those things will become obvious.

It doesn't take a rocket scientist to understand that with today's web architecture, the server is literally housing the applications and serving up every change that a web application requires. Does it not make sense to move the applications to the client side and only provide them services from the server? Servers will continue to serve up applications, but once they are client side, the server doesn't need to get involved in application architecture and UI changes. The app does it all client side.

Here's an interesting client/server dialogue that becomes possible in Koala due to the ability of both server and client to reverse roles - polling.

Imagine an application that is set up to include a state machine. In this scenario, Koala makes it possible for the server to query clients for state information regarding the application's usage history and its current context. Today, such an application would have to generate requests to the server to convey that information, and the server would have no way of determining one state in particular: disconnect.

Polling is an old methodology used in networking, and the utility of such dialogues between client and server is diverse. Control systems are but one of many areas where web polling will shine. So Koala would go beyond allowing simple status checks from the server side or simple handshaking based on processes that would run and come back after a delay and announce their completion.
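A toy sketch of what a server-side poll loop might look like. The Client class here is a stand-in for a real reversed-role connection; the point is that the poller learns each client's state and positively detects disconnects, the one state a purely request-driven model can't report.

```python
# Server-side polling sketch: the server queries each client for its
# state and records the one state a request-driven model never sees:
# disconnect. Client is a stand-in for a reversed-role connection.
class Client:
    def __init__(self, name, state, connected=True):
        self.name = name
        self.state = state
        self.connected = connected

    def query_state(self):
        """What the server would ask over the wire in a Koala-style poll."""
        if not self.connected:
            raise ConnectionError(self.name)
        return self.state

def poll(clients):
    """Poll loop: collect each client's state, record disconnects."""
    report = {}
    for c in clients:
        try:
            report[c.name] = c.query_state()
        except ConnectionError:
            report[c.name] = "disconnected"
    return report

clients = [Client("a", "editing"), Client("b", "idle"),
           Client("c", "editing", connected=False)]
report = poll(clients)
```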

This sort of dialogue between client and server, server to server and client to client is just about unheard of using today's web application architecture.

Here's one more rather simple Koala interface option: a client-to-client HTTP communication between two clients for the same user on the same host. Using today's technology, the user would open a new window, either a dialog or a new browser window or tab, and pass it a query string or form variables in the body of the request, round trip through the server.

With Koala, the user can do this all on the client side without the server being involved. Certainly, this will work off-line as well.

The ability for the client to do a role reversal likely has utility I haven't even conceived of, but having two intercommunicative browsers open, or even frames for that matter, without any reliance on the server, surely has many application uses.

Does this suggest a capacity to batch procedures through a browser? Yes, it does.

Consider for example, a web application that uses another window or tab to actively display a log of activity on the main browser window. This is a simple application that may have many uses in legal contexts or ... perhaps a context where the user may require a 'playback' of their activity. A specialized log of activity available to the user would provide such playback capability with breaks perhaps, to alter the procedure from a certain point, fork or branch, etc.
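Here's one way the playback idea might be sketched, with a dict standing in for application state and a list of (key, value) actions standing in for the log. Replay rebuilds state from the log; a break point lets the user fork the procedure from mid-stream. All names are illustrative.

```python
# Activity log with playback: every action is recorded, and the user can
# replay the log in full or break at a point and branch differently.
# A real application would record richer events than (key, value) pairs.
def apply_action(state, action):
    key, value = action
    new_state = dict(state)
    new_state[key] = value
    return new_state

def playback(log, upto=None):
    """Replay the log (optionally only the first `upto` entries)."""
    state = {}
    for action in log[:upto]:
        state = apply_action(state, action)
    return state

log = [("font", "serif"), ("zoom", 120), ("font", "mono")]

full = playback(log)                          # replay everything
forked = playback(log, upto=2)                # break after step 2 ...
forked = apply_action(forked, ("zoom", 90))   # ... and branch differently
```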

Tabbed browsers are fine, but for a web surfer to be able to readily switch between alternate contexts of an application ... or even to other applications is a highly desirable feature. That's not to say that there aren't ways to do context switching with today's browsers. There are. Bookmarks and Back and Forward reloading of previously viewed pages client side are in effect low level forms of context switching. But we're talking something more here with Koala. Context switching is more robust, allowing rapid access to application contexts, not just browser contexts.

And with better event handling, scheduling becomes a dream come true. The browser user can set up web calendars to pop up alerts for scheduled events. It can start sessions or processes at specific times and have commencement conditional upon completion of some other session or process. Again, the role of batch applications over the net comes to prominence. And with reliable scheduling, the user can be out soaking their toes in the sand at the beach.

Quite naturally, logs of process or session executions become important features, and playbacks, rarely heard of even for operating systems, will shine for the ability to roll back and redo processes that either failed or were being evaluated and tested.

And please note that when we talk about scheduling, these are not administrator chores, but common setups that the user is empowered to do.

Social networking, a sure-fire incubator for groupware, is all the rage these days. On the web, we have evolved chat into messengers, mailing lists into forums, images into photo galleries and video exposés. And on the development side, open source software pulls in contributors to various projects from all over the world. One type of application for social networking on the web, online gaming, provides a very fertile ground for discussing analogues for groupware, and without doubt, gaming developers will likely provide some deep insights into what must be done to bring features to the fore for other groupware applications. Like other intelligent creatures, we learn by playing. And within the context of social networking, applications that may be described as groupware are at the pinnacle of this play.

In the online gaming sector, which for fun and entertainment utilizes some of the current web's most sophisticated features to provide an interactive gaming experience, groupware will be the holy grail, bringing Eden to the web. Certainly, some games on the web are solitary. These are no big deal. But those that are pioneering multi-client group participation are currently limited by what web technology will do for them. And Koala will remedy that.

The big trick with groupware is sharing. Sharing most particularly hovers around specific information and processes, and at the crux of this sharing is a very common data structure: the queue. Now some of you might say no, it's a database or it's the server. But still, the database uses a queue to sequence operations, reads, writes, whatever. And the same is true of the server. There may be several queues involved, and these in turn may be enveloped within other structures and processes, to include, for example, security shields in particular.
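The point about the queue can be sketched in a few lines: several group members submit operations concurrently, and one thread-safe queue serializes them into a single well-ordered stream for the shared gamespace or workspace. This is only an in-process stand-in for what would really span clients and servers.

```python
# Queue-at-the-crux sketch: concurrent members enqueue operations, and
# the queue serializes them into one well-ordered history that the
# shared gamespace/workspace can consume.
import queue
import threading

ops = queue.Queue()          # the shared sequencing structure

def member(name, moves):
    for m in moves:
        ops.put((name, m))   # enqueue; Queue handles the locking

threads = [threading.Thread(target=member, args=("alice", ["n", "e"])),
           threading.Thread(target=member, args=("bob", ["s"]))]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Drain the queue in arrival order -- this is the sequenced history.
history = []
while not ops.empty():
    history.append(ops.get())
```

The interleaving of alice's and bob's moves isn't deterministic, but each member's own moves stay in order, which is the property a sequencing queue is there to guarantee.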

In present scenarios, databases provide their own security, as do servers. But that's the present scenario. Groupware, however, is limited when the sharing mechanisms are limited to the database and server. In Koala, client-to-client communications allow a whole different context of sharing, one that largely bypasses both the server and the database. And that context must accommodate a definition of a secure group where sharing is possible, with a mechanism of select privileges and exclusion of non-registered users. Yet client-to-client connectivity is obscure. In a client/server architecture, the server is always a player and it knows which clients are registered users, while also being capable of public privileges. In the current context, a server database defines the gamespace (or workspace) and users share the gamespace in a way that may be supplemented by messaging each other regarding their inputs to the game. In a client/client context, clients are not necessarily directly privy to who is and who is not a registered user. They have to be informed before a shared connectivity can be set up with privileges in place. Further, the gamespace or workspace is not centered at the server. Part of this space may reside on the server, or it may be completely client-side.

Certainly, a client can query the server to acknowledge another client's registration authenticity and security status, but this involves extra work. An alternate methodology is token passing and exchange. When one client makes a request or responds to another client's request, tokens are exchanged which define authentication and security. These tokens must be mutable. That is, to defeat hackers who would quickly capitalize upon volume traffic that involves a fixed token structure, the tokens should automatically modify their form in a way that is known to registered clients. No hacker can ever fixate on the form of the token and run repeated scripted attempts to break in to the group. Session IDs are open to such attacks, especially when the number of users on a site is enormous. The odds in favor of the hacker's success increase with user volume. Mutable tokens eliminate this volume advantage and make the hacker's success rate fall off to, for all practical purposes, zero. The required exchange of tokens ensures that the hacker can not capture a one-way token and use it to attempt to barter access. The requesting client will always be required to present their token first.

How a token's form is changed is algorithmic, and the frequency or scheduling of the change might be tunable. With very low user volumes, token mutation is required less frequently. Higher user volumes would require more frequent changes. When a groupware session between two clients overlaps the event of a token mutation, two methods may be used to update the clients: server polling of the clients or ... if the clients discover they are being denied inclusion, they can request a new token from the server. Keep in mind, however, that one client has the right and the capacity to specifically deny another client. This is a special form of denial in which the denied client will have enough information to know that it does not need to update its token.
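One plausible way to implement a mutable token is to derive it from a shared group secret and the current mutation window, much like time-based one-time passwords. The sketch below is an assumption, not a Koala specification; HMAC-SHA256, the integer window arithmetic and the grace period for sessions that straddle a mutation are all arbitrary illustrative choices.

```python
# Mutable token sketch: the token is an HMAC of a shared group secret
# and the current mutation window, so its form changes on a tunable
# schedule. A captured token goes stale once the window rolls over.
# (Essentially the idea behind time-based one-time passwords.)
import hmac
import hashlib

GROUP_SECRET = b"demo-secret"   # illustrative only, never hard-code keys

def token_for(window, secret=GROUP_SECRET):
    """Token for one mutation window (e.g. window = now // 300)."""
    msg = str(window).encode()
    return hmac.new(secret, msg, hashlib.sha256).hexdigest()

def verify(presented, window, secret=GROUP_SECRET):
    # Accept the current window and, for sessions that straddle a
    # mutation, the previous one.
    for w in (window, window - 1):
        if hmac.compare_digest(presented, token_for(w, secret)):
            return True
    return False

t1 = token_for(1000)
```

A registered client needs only the secret and a roughly synchronized clock to stay current, while a hacker replaying a captured token is shut out after one grace window.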

Much still needs to be said about how Koala and its capacity for client/server role reversals impacts groupware, and it is not beyond reason that the traditional HTTP/HTML vehicle for information exchange may not only see alterations, but be supplemented by other vehicles and methods for exchange. But that's meat for another post.

It is inevitable that eventually a proposed technology such as Koala will turn the corner of revelation, exposing the crux of why the quagmire exists in the first place and why Koala and other web technologies that are so large in scope that they require collaborative effort may be impossible to implement fully, if at all.

Before getting back to Groupware and the certain interest it must have for application architects, let's take a look again at HTML and how it applies to full blown applications on the web. Currently, large scale applications are written in a language like C/C++, reside on client-side diskspace and if they need services from the internet, do not use the web, but open a port to access those services. This of course, means that those applications are not portable between platforms and are at least, OS versioned. Further, there is no connection to the web interface. Users have very little in the way of interacting with such applications as part of the web experience.

In large server farms and other contexts, hypervisors serve to eliminate the portability issue, but at a cost. The hypervisor is an intermediate layer which acts as an application on the OS, a container that translates OS and other system calls for other applications. Historically, the general public has favored the MS Windows platform for its application availability. Windows emulators can be run on Linux and Unix platforms to make that application wealth available on those platforms. Processor speeds have increased, and having such an intermediary level, even as a byte code interpreter for API languages, is much more feasible today than it has been in the past. Yet the common API for such applications has yet to evolve as proposed here, for Koala.

The political and practical crux of implementing the API will be a tricky one, indeed. The historical prominence of MS Windows as the base of most generally public applications might have some saying "Model that API after the Win32 API" while others will tout (as I do) the straightforward nature of the Unix system API as the preferred model. If the historical precedent were not there with an established base for Windows applications, the Unix API would definitely be more clear cut and preferred. Remember that the VM OS layer for Koala is a translator ... so the API as it appears to application programmers is independent of the native OS. A third approach would be to have the API be something entirely different from Windows Win32, Unix, Apple or any other API. It would seem to be an impossibility to have the API cater to compatibility with more than one OS style. In any case, this is a huge hurdle for the collaborative implementation of Koala to overcome. If I were to say outright that the API should be modelled after the Unix API, the MS folks would balk. Yet I have looked at Win32 and ... well, give me the Unix API. The problem is going to be a tough sell, but it needs to be settled or the portability issue will remain. It will just be moved from the OS to the browser. As can be imagined, going with the Unix API will make all those Windows applications suddenly available to Unix/Linux systems, not just the browsers. This could be the beginning of a great exodus from an OS that has historically seen a morass of security problems and other issues.

I wish I knew the answer to how diplomacy can resolve this issue. The dream of having a straightforward, simple API to develop full-featured web applications on top of hinges on how the Koala API will look. And even then, a large number of applications may be written to run directly on the native OS, using the browser's VM OS layer only for web services.

There. I've said it. The great technological double bind for application developers is on the table.

Actually, after looking over all of what I've posted in this thread, I think a white paper might be a good idea and fun thing to do with this concept.

Originally, I just wanted to say a few words about a simple web application wish list ...

1) A simple API that in my mind would be similar to the Unix Programming API but expanded to include web features and allow for more robust web applications than are currently available and in some cases, possible.

2) More comprehensive and effective security for the web and internet.

Well, instead of stopping there, I did a brain dump. And yes, there was a lot to dump and I still haven't dumped it all ...

Yeah, I think a white paper would be fun. I've never written one before. It wouldn't be the sort that markets a corporate product, touting Koala's virtues to generate revenue. Nah. Nor would it be to support an organization's cause, since there is no organization to overcome the hurdles Koala would. I would likely wind up hosting it on my own web site.

I figure 3 sections ...

1) Executive ... explaining the basic overview and general tenets.

2) Managerial ... getting deeper to explain how Koala apps might serve organizational purposes and be marketable for products written for it.

3) Engineering ... the meat of the matter, perhaps even to greater detail than the brain dump presented here.

I like to bandy about a couple words with my own bastardized definitions, but perhaps not so bastardized. When you have certain behaviors to describe and there is no neat word that describes them, you quite naturally borrow from something that is closely analogous even though it is used for some other definitive purpose. Etymologically, that is how language grows.

The following terms might well be applied to information and control theory ... and artificial intelligence.

Q: What is a tautology?

A: The dictionary says that this is a needless repetition. But hey ... OK ... is brushing your teeth every morning needless? The bastardized definition might include such activity or behavior, right down to breathing. But in the BearState parlance, tautology is related to the act of learning. The common analogue is putting your finger out toward a hot stove, getting burned and learning not to do that again. It's repetitive probing, and it is a very common method of learning. In fact, experimentation using the scientific method is tautological. And so, we see that the dictionary requires some revision. Tautologies are NOT needless repetitions. They are useful methods to learn by proof, and we all do it to some degree. Some of us use recipes based on what other people have written about their tautological proofs, while others use tautology more in the process of learning. These folks are likely much more analytical in their thinking. Tautological learning behavior comes into play even when a person projects or predicts behavior through theory and then seeks to prove the theory.

Q: What is tautomerism?

A: The dictionary defines tautomerism as a relationship between chemical isomers which are capable of changing form, particularly, exchanging each other's forms. Evidently, this is a repetitive action. The bastardized BearState definition is more general. It relates to repeated measurements of some entity in a learning or experimental context. In particular, I have used the word in the process of talking about 'chunnelling', such that routers exercise some AI to learn from repeated exchanges of traffic, such that they learn best routes. I have suggested that the end points of a route ... the application nodes ... participate by carrying the route information with the transmitted packets, and that this might help to use software to create the equivalent of static routes.

I hope that helps for those who have stumbled over my usage of these terms and the context of their usage was not clear enough to illuminate the definition intended.

NOTE: It is entirely possible that I am not the first to bastardize these words to form precisely the same usage, so I claim no credit for being the founder of these definitions. I would suspect that you'll find these words used precisely in this way in the behavioral sciences and perhaps even in AI and Information and Control Theory. For example, in behavioral learning, once a tautological learning pattern becomes routine, its definition becomes rote. Before one can ride a bicycle, they must try it with a degree of repetition that we can describe as a tautology.

Let's have a little more fun with Koala, but for the sake of efficiency.

When we talk about repetition in anything, we can visualize methods of re-use. This is done quite often in the current genre of browsers and servers and I don't think I need to go into specific detailed examples. If one puts their mind to it, the examples become obvious.

No, instead, let's turn back to the languages a Koala web application developer will have at their disposal. There would be a trend toward C/C++ or more traditional high level languages to generate the applications that will run on the VM OS layer using more standardized APIs. But let's toss something else out for reflection ... byte code packets. If you've got an emulator that runs byte code for portability magic, why send HTML which needs to be parsed and compiled in situ on the client side? It only makes sense to have re-usable byte code packets client side as part of the app ... or pre-compiled packets. We've already said that with the VM OS layer it's possible to use the OS and all of its load libraries ... but for quick and dirty portable web app code ... ready-mix byte code packets would be a charm.

The packets, by the way, can be envisioned as carrying a signature such that, for security reasons, packets lacking that signature would not be allowed to run. I'll refrain from presenting what I envision that signature's content might be for now.
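Since the signature's actual content is deliberately left unspecified above, the following sketch just uses a shared-secret HMAC as a stand-in to show the refuse-to-run mechanic; the secret and function names are assumptions, not a proposal for the real scheme.

```python
import hashlib
import hmac

SECRET = b"shared-deployment-secret"   # assumed provisioning detail

def sign_packet(payload: bytes) -> bytes:
    """Prepend a 32-byte HMAC-SHA256 signature to the packet payload."""
    tag = hmac.new(SECRET, payload, hashlib.sha256).digest()
    return tag + payload

def verify_and_extract(packet: bytes) -> bytes:
    """Refuse to hand back the payload unless the signature checks out."""
    tag, payload = packet[:32], packet[32:]
    expected = hmac.new(SECRET, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):   # constant-time comparison
        raise PermissionError("unsigned or tampered packet refused")
    return payload
```

The point is only the gate itself: an emulator that executes nothing it cannot verify.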

Coding for security on the web is one of those highly visible careers which can make or break a developer. Rightfully, programmers working on security projects may often be subject to quite a bit of personal scrutiny, especially when money or other assets are involved.

And working security solutions can be complex. Programming for security demands continuous dynamic adaptation. One might consider all possible avenues by which security may be broken, yet still miss less than one percent of the issues, and that will be enough to allow security to be broken. And the programmer must always assume that they have indeed missed something, whether in identifying a threat to security or in providing a valid solution to eliminate that threat.

With security, we could pose a couple of theorems. And these have likely already been stated somewhere, so I won't call them Brian L. Donat's Security Theorems. If nobody's come up with these theorems yet, well, OK ... call them Brian L. Donat's Web Application Security Theorems.

Theorem #1: The more complex a web application is, the more vulnerable it is to security breaches.

Theorem #2: No web application can be totally immune to security breaches. If somebody wants in badly enough, they will find a way. And they don't need to do it over the internet.

Theorem #3: Time increases the risk that security in a web application may be compromised.

The most total and certain form of security is complete isolation.

In fact, isolation is the goal with security. However, for a web application to be useful and achieve a degree of practicality, it cannot be totally isolated. Instead we impose limited isolation through restrictions.

The simplest forms of security are based upon this concept and work together to enforce limited isolation ... or limited access, however you choose to term it.

Access Lists
Permissions
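A minimal sketch of how those two mechanisms cooperate to enforce limited isolation: the access list decides who may reach a resource at all, and permissions decide what they may do with it. All names here are illustrative.

```python
ACCESS_LIST = {"alice", "bob"}                    # who may connect at all
PERMISSIONS = {"alice": {"read", "write"},        # what each user may do
               "bob": {"read"}}

def authorize(user: str, action: str) -> bool:
    """Two walls: first the access list, then the per-user permissions."""
    if user not in ACCESS_LIST:                   # wall one: access list
        return False
    return action in PERMISSIONS.get(user, set()) # wall two: permissions
```

Everything not explicitly granted falls through to a refusal ... the restriction IS the isolation.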

Going beyond this, we find that most of what an engineer must code for when evolving a secure web application environment must compensate for those three theorems, which may be reworded as follows ...

Given enough time, the security of a web application may be breached through holes discovered in the wall that would normally provide the limited isolation needed for its practical usage. In the end, an application's security may even be breached via means that go beyond the web application itself, and beyond the internet and the world wide web.

We can rationalize then that web application security involves the practical use of isolation mechanisms, the elimination of holes in the application which may be used to compromise that isolation, and in particular, the dynamic imposition of change upon the isolation mechanism to reduce the risk of breaches. This should shed some light upon the motivation for dynamic token exchange as a means of securing communications between groupware clients, which was defined in a previous post.
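Dynamic token exchange can be sketched very simply: each successful exchange retires the old session token and issues a fresh one, shrinking the window in which a stolen token is useful. This is an assumed illustration of the idea, not the scheme from the earlier post.

```python
import secrets

class RotatingSession:
    """Session whose token changes on every exchange with the peer."""
    def __init__(self):
        self.token = secrets.token_hex(16)

    def exchange(self, presented: str) -> str:
        if not secrets.compare_digest(presented, self.token):
            raise PermissionError("stale or forged token")
        self.token = secrets.token_hex(16)   # the old token is now useless
        return self.token
```

A replayed token fails on the very next exchange ... the isolation mechanism itself keeps moving.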

Here are some typical holes in the security wall ...

SQL injections
HTML injections
Scripted attempts to break passwords
Scripted attempts to utilize sessions
Scripted attempts to gain user ID and password information from registration code
Phishing for personal information, most particularly login IDs and passwords
Detection of application-sensitive information by intentionally causing a web page to error
Reading code pages on the web site
Breaking into the web site via an alternate application running concurrently on the site, either in support of the server or as another application available through the same site
Exposure of information in the URL of GET requests to the server

and so on ...
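To make the first hole on the list concrete: a SQL injection succeeds when user input is concatenated into the query string, and fails when it is passed as a bound parameter that the driver never interprets as SQL. The sketch below uses sqlite3 purely for brevity.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

hostile = "' OR '1'='1"                     # classic injection payload

# Vulnerable: concatenation lets the payload widen the WHERE clause
# to match every row in the table.
rows_bad = conn.execute(
    "SELECT * FROM users WHERE name = '" + hostile + "'").fetchall()

# Safe: parameter binding treats the payload as a plain string,
# so it matches no user name and returns nothing.
rows_ok = conn.execute(
    "SELECT * FROM users WHERE name = ?", (hostile,)).fetchall()
```

One line of discipline closes the hole; most of the other holes on the list have similarly mechanical closures once identified.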

And now the scary part ... web application developers, even those whose job it is to code for security, do NOT understand all the possible holes. Holes are often discovered AFTER a breach has occurred. This does not, however, preclude prediction or pre-identification of such holes based upon experience with prior or general applications.

And so, it is accepted that ensuring security is a complex task and that it can never truly be one hundred percent effective. But the history of the web has demonstrated a pattern of abuse and certainly suggests a new approach.

With Koala, the security philosophy implements a shift to the following objectives ...

1) Eliminate the children who are hacking, such that the remaining hackers must be very good at what they do to be successful.
The client 'option' to have security must be eliminated, with a large part of the security task moved to the route or stream. And that introduces yet another set of tasks for web application security ... monitoring, detection and elimination.

2) Through detection, endeavor to identify and eliminate the more highly skilled hackers.
Methodologies necessarily go beyond the internet and the web and into law enforcement and the courts. Detection and identification should become key elements in the web security scenarios of the future, and the implementation of this vehicle should not be a client option. Intrusion by law enforcement into offending systems to identify and track offenders should be explored once such intrusions are deemed justified by probable cause.