During software development the developer has to follow a process known as the SDLC (system development life cycle), using one of several software process models. This is to establish “the order of the stages involved in software development and evolution and to establish the transition criteria for progressing from one stage to the next.” (Barry W. Boehm, A spiral model of software development and enhancement, p.61)

The two common models are the waterfall model and the spiral model. The waterfall model provides a “linear and sequential” development method; once the developer has passed a phase, like a waterfall, “you never go back” (http://www.iuk.be/ist205/sess8.html); there is no backtracking, reversing or revising previous stages.

On the contrary, the spiral model allows for a non-linear and more flexible development method. Phases can be revisited at any stage in the process and as many times as need be. The four phases in the spiral model are planning, risk analysis, engineering (development) and evaluation.

From simply scanning the four main phases in each model it is apparent that the spiral model is the more stable and reliable of the two: there is evaluation and then risk analysis immediately after planning, which means that if anything is wrong or miscalculated in the initial planning it will instantly be recognised and corrected before implementation; “important issues are discovered earlier” (http://www.answers.com/topic/spiral-model). By contrast, in the waterfall model, not only is there no risk analysis, but testing only happens after implementation, meaning that if there is something wrong in the planning it is not realised until it is too late. It is only “assume[d] that development can follow a step-by-step process” (www.horde.org/papers/oscon2001-case_study/4_waterfall.xml.html) when clearly this is not the case. The waterfall model is “risky and invites failure”, as Dr. Winston W. Royce, the founder of this model, admitted himself.

The spiral model can be seen as a much more complex model; the diagrams juxtaposed are a mere demonstration:

[via appendix]

This explains why the waterfall method is described as a “simple… disciplined approach” (Dr. D. Tishkovsky, Software Development Tools, slide 4); however, I don’t see how this model intends to “manage risks” (www.faqs.org/faqs/software-eng/part1/section-4.html) when bearing in mind that phases cannot be revisited and it is “impossible to capture everything in the initial requirement documentation.” (http://swiki.ucar.edu/sea-best-practices/15) Inaccuracies in one phase are “carried out in the next phase” (http://www.buzzle.com/editorials/1-5-2005-63768.asp), creating an accumulation of problems by the time implementation is reached. Surely this model is setting one up to fail.

Despite this, it is argued that it does produce “highly reliable system[s]…with large capability of growth.” (Dr. D. Tishkovsky, Software Development Tools, slide 4) Yet this is only possible if the developer has “complete knowledge of the problem and do[es] not experience change.” (http://swiki.ucar.edu/sea-best-practices/15) Ironically, more often than not “requirements will continually change.” (previous website) Additionally, at the testing phase, if the software fails to meet the requirements then a “major redesign is required” (Dr. Winston W. Royce, Managing the development of large software systems, p.329), yet this need is hindered as “design can’t be changed after the design phase.” (http://www.ommadawn.dk/libellus/begreb.php?emneid=54) This explains why this model “does not work well” for several categories of software; it may be simple, but it is not realistic in the world of development.

At the opposite end of the spectrum, although it is said that the spiral method derived from the waterfall method, it was designed to “overcome the disadvantages of the waterfall model.” (www.buzzle.com/editorials/1-13-2005-64082.asp) It can cope with the “inevitable changes” (http://www.answers.com/topic/spiral-model) of a development process; not only can phases be revisited, as stated earlier, but each phase begins with a “design goal and ends with the client…reviewing the progress” (http://www.answers.com/topic/spiral-model). This model undeniably concentrates on removing “errors and unattractive alternatives early” (Barry W. Boehm, A spiral model of software development and enhancement, p.69) on. Its “risk driven approach avoids many” of waterfall’s difficulties. (previous article, p.69)

However, the constant client reviewing means that this model “only really works well on internal software development.” (previous article, p.70) Furthermore, because its success is determined by detailed risk assessment, using this model relies on “risk assessment expertise” (previous article, p.71), which is not always available.

Although the waterfall model “looks very good on paper” (http://www.ommadawn.dk/libellus/begreb.php?emneid=54), better even than the spiral method, a product development process is not this simple. It’s said that a waterfall model’s success is determined by the “time spent early on”, as there is no turning back. But when does one know that enough time has been spent? When is the cut-off period? In fact, it’s said that the closer a “particular plan...[is] followed” (http://www.iuk.be/ist205/sess8.html) the more successful the development; however, in the case of such a strict, non-iterative model I think that the closer the waterfall model is followed, the greater the chance of failure!

What makes the spiral so robust is that not only does it begin with a hypothesis that, if at any time it fails, means “the spiral is terminated” (Barry W. Boehm, A spiral model of software development and enhancement, p.65), but each cycle of the spiral also begins with the aims identified, the “alternative means of implementing...the product...[and] the constraints imposed”. (same article, p.65) Progression is constantly under scrutiny, so there is little chance of mishaps being overlooked. Moreover, the early steps involve a first prototype made from the preliminary design; its “strengths, weaknesses, and risks” (http://searchvb.techtarget.com/sDefinition/0,,sid8_gci755347,00.html) are evaluated, and a second prototype is constructed from this analysis and tested. This prototype is also evaluated by the customer and the “preceding steps are iterated until the customer is satisfied.” (same website)

The waterfall model may have provided the basis for a number of software development methods, as it is the “first process model” (www.buzzle.com/editorials/1-5-2005-63768.asp); however, the disadvantages discussed explain why this method “usually breaks down.” (http://swiki.ucar.edu/sea-best-practices/15) It may allow for “departmentalization and managerial control” (http://searchvb.techtarget.com/sDefinition/0,,sid8_gci519580,00.html), along with providing neat schedules and deadlines due to it “flowing steadily downwards” (www.selectbs.com/glossary/what-is-the-waterfall-model.htm), but regardless of how ‘steadily’ it flows, risk factors and problems are ‘steadily’ missed and “design changes are likely to be so disruptive.” (Dr. Winston W. Royce, Managing the development of large software systems, p.329) In fact, it is argued that it “often yields projects that are horribly behind schedule” (http://www.cs.bsu.edu/homepages/metrics/cs639d/CS639WWW/chapter13/sld011.htm), not on schedule! I would choose the spiral method over the “oldest system development method” (http://www.ommadawn.dk/libellus/begreb.php?emneid=54) in a flash.

Monday, 9 April 2007

I kept hearing the term “RSS” in my lectures, yet I hadn’t a clue what it meant, so when I realised that I could produce an entire learning log on it, I thought this would be the perfect opportunity to erase all my queries and confusion.

I found two meanings of ‘RSS’ via http://www.rssunderground.com: “Really Simple Syndication” and “Rich Site Summary”, both of which ultimately mean the same thing, apparently. I wasn’t sure what syndication meant. I was clearly behind in my IT knowledge, so before I read on I went to www.google.co.uk/search and found that syndication is “the process by which a web site is able to share information.” OK, so now I had discovered that RSS had something to do with sharing information over the web.

I returned to the first website I visited to read more on RSS. My finding on the definition of ‘syndication’ was confirmed, as “RSS is about sharing data…a method of delivering updated dynamic content that changes.” (http://www.rssunderground.com) This rang some bells, as I had recently been investigating content management systems and blogs, which consist of ‘dynamic content that changes’ via users; hence Mark Pilgrim (http://www.xml.com/pub/a/2002/12/18/dive-into-xml.html) states that RSS is “popular in the weblogging community.” Things were slowly piecing together and I realised that the learning log topics were not merely random and unconnected.

I still wasn’t sure exactly what RSS was and what it did. Ironically, whilst searching for a job via http://www.londonjobs.co.uk/cgi-bin/myjobsite.cgi, I discovered that RSS is a way to receive “updated data from your favourite websites.” Instead of visiting a number of different websites to, for instance, check on updated news articles or blogs, the updates can be “sent to [you]…in one place rather than searching it out.” (http://websearch.about.com/od/rsssocialbookmarks/f/rss.htm)

I kept coming across the term ‘RSS feed’. Admittedly this term confused me, so as always I went on to investigate what it meant. I never directly found out what the singular term ‘feed’ meant; however, I did discover that one way to receive an ‘RSS feed’ is by downloading a “feed reader, or feed aggregator.” (http://websearch.about.com/od/rsssocialbookmarks/f/rss.htm) Once this is done, one can effortlessly obtain “RSS information from websites that offer the service.” (http://www.londonjobs.co.uk/cgi-bin/myjobsite.cgi) I take it that ‘service’ refers to the ‘RSS feed’ service.

Gradually I understood the subject of RSS and its purpose. I had already subscribed to newsletters/information from websites to receive updates on things such as jobs and shows; hence “syndication or aggregation [is] sometimes…just called subscribing.” (http://www.sixapart.com/about/feeds) I still had yet to find out what ‘feed readers’ actually were.

Feed readers are “RSS aware programs [also] called news aggregators” (http://www.xml.com/pub/a/2002/12/18/dive-into-xml.html), meaning that when there is a file in RSS format the feed reader can “check the feed for changes” (above website) and react to them as appropriate. Some of these programs “parse the data and put it on your web page as HTML” (http://www.rssunderground.com), where to parse means breaking down data input into “smaller, more distinct chunks” (http://www.google.co.uk/search) for easier interpretation.

By now I had realised that a feed reader is simply a program that reads RSS feeds, checks the information for changes, and then delivers or ‘feeds’ the user the new information.
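That core job of a feed reader can be sketched in a few lines. The following is only a minimal illustration of the idea, using Python's standard xml.etree module and an invented two-item feed; it is not the code of any real reader:

```python
import xml.etree.ElementTree as ET

# A tiny invented RSS 2.0 feed, as a reader might download it from a website.
FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example News</title>
    <item><title>First article</title><link>http://example.com/1</link></item>
    <item><title>Second article</title><link>http://example.com/2</link></item>
  </channel>
</rss>"""

def item_titles(feed_xml):
    """Parse an RSS feed and return the list of item titles."""
    root = ET.fromstring(feed_xml)
    return [item.findtext("title") for item in root.iter("item")]

# The reader polls the feed and reports anything it has not seen before.
seen = {"First article"}
new_items = [t for t in item_titles(FEED) if t not in seen]
print(new_items)  # ['Second article']
```

A real aggregator would fetch the feed over HTTP on a timer and render the new items, but the parse-and-compare step above is the essence of "checking the feed for changes".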

There are a number of ways to receive feeds: via the web, such as “My Yahoo!... My MSN, or My AOL” (http://www.sixapart.com/about/feeds); via certain web browsers, such as Mozilla Firefox, Internet Explorer 7 or Safari (for Apple Macs); or via stand-alone programs that you can download, such as Straw, FeedDemon, SharpReader or NewsGator.

Feed readers on the web or in the browser are good if a user does not wish to install any programs. The pros of web feed readers are that the “services are free” (http://www.sixapart.com/about/feeds), they can be accessed from any computer (provided, of course, the computer has internet access), and they are “elegant, fast, simple, and easy to master.” (http://www.askdavetaylor.com/how_do_i_subscribe_to_an_rss_feed.html) Feed readers built into the browser can only be accessed on the computer where that browser is installed; however, they are just as good as web readers. Stand-alone feed reader programs cleverly allow your feeds to be stored “even if you’re not connected to the internet.” (http://www.sixapart.com/about/feeds)

I wanted to explore these different feed readers myself, so I signed up for Google Reader and downloaded SharpReader. At first it was unfamiliar, as it looked a lot like an emailing program such as Outlook Express. I played about with them and finally subscribed to some feeds. I can agree with Erik J. Heels (http://www.erikjheels.com/2007-02-23-google-reader-vs-sharpreader.html) that SharpReader most certainly has a “simple interface”, much like Google Reader, and both allow the user to “group subscriptions into folders” (above website). However, I much preferred SharpReader, as the interface was not too ‘patronising’; it didn’t use large, colourful and ‘friendly’ text/images. SharpReader also seemed to have a lot more options and settings:

[via appendix]

Even if Google Reader does have these options available, they clearly were not as easy to find as those in SharpReader. Erik J. Heels also suggests that in Google Reader “folders seemingly can not be renamed”. Despite my preference for SharpReader, I must admit that I did not experience this; I was able to change the names of both the files and the folders.

I was unable to access Internet Explorer 7 (IE7), so I simply researched this browser’s feed reader. I discovered that although IE7 allows the user to “filter by category…[has a] built in comments section…[has a] search feature…[and] the ability to sort by date, article name, and author” (www.ahfx.net.weblog.php?article=100), Google Reader was in fact “miles ahead of it” (www.microsoft.com.windows/IE), and IE7 was, to my surprise, not as robust. Apparently, with IE7, finding out how to read the RSS feeds “is not as easy” (www.seebreezecomputers.com/tips/rss). Furthermore, from the same website, I uncovered that “the menu bar is missing” until the user enables it; the user must “right-click on a blank space on the top toolbars” and select Menu Bar to turn it on; clearly not user friendly. On top of this, the “buttons for Search, Favourite, and History to open up in a side panel” (www.seebreezecomputers.com/tips/rss) are absent too. Another con is that you have to download a large browser that asks several technical questions about which the user is more than likely none the wiser.

From further lectures I was taught how to create my own RSS feed. I simply created my own basic web page with three news articles:

[via appendix]

I then went on to http://www.toucanmultimedia.com/rssmaker.php to create my RSS feed.

[via appendix]

After simply filling out the form, this website generated my RSS code:

[via appendix]
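Since the generated file itself is in the appendix, here is an illustration of what a minimal RSS 2.0 document of the kind such generators produce looks like; the titles, links and descriptions below are invented stand-ins, not my actual generated code:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <title>My News Page</title>
    <link>http://localhost/news.html</link>
    <description>News articles from my basic web page.</description>
    <item>
      <title>First news article</title>
      <link>http://localhost/news.html#one</link>
      <description>Summary of the first article.</description>
    </item>
  </channel>
</rss>
```

The single channel element describes the site as a whole, and each item element is one article; a feed reader simply watches for new item elements.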

I then copied all my files onto the local server and opened my RSS file. It looked exactly like the ones I had signed up for on the web:

[via appendix]

My RSS worked as shown below. I couldn’t believe how simple it was.

[via appendix]

Something else I noticed that intrigued me was the use of XML syntax. I had already discovered that RSS “is an xml-based web-content syndication format.” (http://www.businessweek.com/search/rssfeed.htm) Again, XML was something I had heard of but knew practically nothing about. However, after researching via http://www.w3schools.com/xml/xml_usedfor.asp, I found out that XML (Extensible Markup Language) can “be used to store data in files or in databases… [and] used to exchange data”. This explains why RSS feeds are written in an XML format, as XML “create[s] data that can be read by different types of applications” (http://www.w3schools.com/xml/xml_usedfor.asp), as well as allowing applications to be combined in order to accumulate and exchange information. This is clearly a function needed for a user to receive updated news/articles or other information via RSS feeds.

RSS feeds are becoming more and more popular, “revolutionizing the way we search for content” (http://websearch.about.com/od/rsssocialbookmarks/f/rss.htm), as they can allow content to be seen where search engines and directories may not in fact find it, as well as helping web owners optimise their sites and get “their site noticed.” (above website) Although the above website does state that RSS feeds are a “wonderful resource”, I agree with the author of www.seebreezecomputers.com/tips/rss, who states that when you click on the RSS or XML buttons to subscribe to a feed you are presented with “some useless code” rather than subscribing to the feed. Being familiar with computers, the internet and IT in general, I was able to work this out; however, other users with little IT knowledge/experience may not be able to. Subscribing to a feed in a more user-friendly way would definitely be a huge improvement to this ‘wonderful resource’.

Sunday, 8 April 2007

To begin with, I am a little unsure of what exactly a web server does and what makes a ‘good’ web server; therefore I will briefly investigate the basics before delving into the comparison of Apache and an alternative web server.

What makes a good Web server?

• One that quickly, reliably and securely transfers data between the client and the server via Hypertext Transfer Protocol (HTTP).
• One that rapidly retrieves and delivers the requested files/scripts to the browser for the client.
• Ultimately, a good web server “serves content to web browsers on client machines” (http://www.siteground.com/apache%20servers-hosting.htm) consistently and promptly.
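The request–response cycle those points describe can be demonstrated with Python's built-in http.server module. This is only a toy single-page server standing in for a real one like Apache, with the client request included so the whole exchange is visible:

```python
import threading
import urllib.request
from http.server import HTTPServer, BaseHTTPRequestHandler

class HelloHandler(BaseHTTPRequestHandler):
    """Serve one fixed HTML page to every GET request."""
    def do_GET(self):
        body = b"<html><body>Hello from a tiny web server</body></html>"
        self.send_response(200)                       # HTTP status line
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)                        # deliver the content

    def log_message(self, *args):                     # silence console logging
        pass

# Bind to an ephemeral port and serve requests in a background thread.
server = HTTPServer(("127.0.0.1", 0), HelloHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Act as the client: this is the browser's role in the diagram.
with urllib.request.urlopen(f"http://127.0.0.1:{port}/") as resp:
    status, page = resp.status, resp.read()
server.shutdown()
print(status)  # 200
```

Everything a production server adds on top of this loop, such as caching, concurrency and security, is aimed at doing this exchange consistently and promptly.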

So from the information I found above I suspect that a logical diagram of a web server and the relationships with its neighbors is as follows:

[Via Appendix]

The “most popular WWW server on the internet” (http://web-hosting.candidinfo.com/server-operating-system.asp) is apparently Apache which is the first web server I will investigate.

From http://www.iuk.be/ist205/sess6.html I found that Apache is said to be a “powerful [and] flexible” web server that can be configured and customised to an extremely extensive degree as required by the user. Tobias Schlitt (http://schlitt.info/applications/blog/index.php) shows that for a “large” file it takes Apache only “0.004” seconds to find and serve the requested content to the client browser. This is undeniably very fast; however, Schlitt does not state what a “large” file is, and it would be interesting to know how many bytes the file contains in order to compare that to the load time. Either way, Apache clearly ships pages at a rapid rate.

Furthermore, Apache can run on all popular operating systems, including Windows XP and Windows 2000/2003, as well as NetWare, OS/2, NT, Linux and the majority of UNIX versions. Despite the fact that Apache is clearly a multi-platform web server, Apache “runs best on Linux” (http://web-hosting.candidinfo.com/server-operating-system.asp), as it runs faster on this OS. Considering this, if users want to gain the most functionality out of this web server then they must obviously use a Linux OS. This is a whole new issue in itself, as Windows is more popular than Linux. Despite this, Apache still runs very well on all the other operating systems.

Apache is clearly a constantly improving web server, as Apache 2.0 only used to run on “Unix based operating systems… [and]…on windows 2000” (http://www.shop-script.com/glossary.html) but has clearly been made far more flexible and is still “actively being developed.” (www.iUK.be)

Apache’s constant improvement and updates stem from its open-source nature. This is undoubtedly an advantage, not only because it is free but also for users who want to “add functionality.” (http://web-hosting.candidinfo.com/server-operating-system.asp) One can write “modules using Apache module API.” (www.iUK.be) There are two main places to change information on the Apache web server (http://www.garnetchaney.com/htaccess_tips_and_tricks.shtml):
• httpd config files
• per-directory .htaccess files

The httpd config files can only be edited by server administrators; however, .htaccess files can be accessed by users and put into “their individual directories” (http://www.garnetchaney.com/htaccess_tips_and_tricks.shtml), and these will then override the httpd config files.
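For illustration, here is a sketch of what such a per-directory .htaccess file might contain; the paths and realm name are invented examples, though the directives themselves (AuthType, AuthUserFile, Require, ErrorDocument) are standard Apache ones:

```apacheconf
# Hypothetical .htaccess file; paths and realm name are invented.

# Password-protect this directory.
AuthType Basic
AuthName "Members Only"
AuthUserFile /home/example/.htpasswd
Require valid-user

# Show a custom page when a requested file is not found (error 404).
ErrorDocument 404 /errors/notfound.html
```

Because the file sits inside the user's own directory, its rules apply there without the administrator having to touch the main server configuration.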

The above website noticeably gave me an insight into why Apache allows users to configure it as well as administrators. On the other hand, I didn’t know exactly what .htaccess files allowed users to do, so I went on to research this Apache element further. I found via http://www.freewebmasterhelp.com/tutorials/htaccess/ that .htaccess can provide a number of services, the most common being password protection for “specific files or directories” or the webpage presented when the requested file “is not found (error 404).” (http://www.free-webhosts.com/definition/htaccess.php)

It is clear that Apache is a powerful web server. The .htaccess files, telling the “server how to behave” (http://www.free-webhosts.com/definition/htaccess.php), explain why the user can configure the web server almost exactly how they want it.

There are alternative web servers to Apache within the competing market. These include Microsoft IIS, Sun, Zeus and thttpd. I have also found a currently strong contestant I had not heard of called LightTPD, which is also an open-source web server. I believed that Apache was no doubt the best after my investigation; however, after research into LightTPD my opinion has largely changed.

LightTPD is known for its good “security, speed, compliance, and flexibility… With a small memory footprint compared to other web-servers, effective management of the cpu-load, and advanced feature set (FastCGI…).” (http://www.lighttpd.net/) Admittedly, this is stated on the LightTPD website itself and so could be rather biased; therefore I went on to look for other opinions.

Mark Andrachek (http://webmages.com/archives/2005/03/16/apache-alternative) states that “it’s much faster and lightweight than Apache… [and] also has Fast CGI support…and uses less than 4MB of ram.” This is noticeably smaller than that required by Apache, which uses “220MB of memory” (http://forums.vpslink.com/archive/index.php/t-1033.html); that is, the Apache 2.0 version. This supports the “small memory footprint” statement made by http://www.lighttpd.net/ themselves.

I found an interesting comparison: as stated earlier, Apache can load a large file in 0.004 seconds; Tobias Schlitt (http://schlitt.info/applications/blog/index.php?) discovered that LightTPD, on the other hand, loads the same large file in “0.001” seconds. LightTPD so far seems to be winning the better-web-server race. Although this may seem the case, others argue that LightTPD “is too fast” (Durgaprasad, http://durgaprasad.wordpress.com/2006/09/28/lighttpd-vs-apache-http-server/b). I think there is just no pleasing some people.

Regardless, Durgaprasad goes on to say that it has an “excellent….ability to spawn fastcgi processes”, which means that if the server becomes loaded with “heavy traffic” then it can “automatically do the load balancing”. This confirms the statement by http://www.lighttpd.net/ that LightTPD is “perfect…for every server that is suffering load problems.”

Like Apache, LightTPD is open source and so can be configured as needed; however, unlike Apache, LightTPD does not support .htaccess files. This may be seen as a problem, yet although .htaccess gives Apache flexibility it does in fact “slow apache down” (http://webmages.com/archives). LightTPD does not have that extra obstacle: Apache reads the httpd config files and then the .htaccess files, which tend to overrule the httpd config files, whereas with LightTPD “everything is entered directly in to the main config file.” (http://webmages.com/archives/) This may be partly why LightTPD has a faster serving time than Apache. On top of this, although .htaccess files may be “overwritten very easily” (http://searchnetworking.techtarget.com/sDefinition/0,290660,sid7_gci214573,00.html), this can create issues “for users who once could access a directory's contents, but now cannot.” (previous website) The same website also states that .htaccess files can be “retrieved by unauthorized users”, which clearly shows a glitch in the security of Apache.
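The "everything in one main config file" approach can be pictured with a short, hypothetical lighttpd.conf fragment; the paths here are invented examples:

```conf
# Hypothetical lighttpd.conf fragment; paths are invented.
server.document-root = "/var/www/example"
server.port          = 80

# All behaviour lives in this one file; there is no per-directory
# .htaccess lookup on each request, which is one reason it is fast.
server.modules = ( "mod_access", "mod_fastcgi" )
```

Since the server only ever consults this single file, there is no per-request search through directories for override files, and no override file for unauthorized users to retrieve.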

Additionally, where Apache configuration requires writing a module, Vincent Delft says that LightTPD “has integrated support for scgi and doesn’t need an additional module” (http://mail.mems-exchange.org/durusmail/quixote-users/5663/). Apache does need one: a user must use mod_scgi if SCGI is needed for Quixote applications.

LightTPD seems to be taking over security-wise, speed-wise and feature-wise when juxtaposed with Apache. The fact that it powers a number of “popular Web 2.0 sites like YouTube, Wikipedia and meebo” (http://www.lighttpd.net/) simply proves its might, reliability and capability.

I then went on to look into relational database management systems. I already knew that an RDBMS is system software that stores data across related tables, where primary and foreign keys provide the link between them. I knew that DBMSs manage data, as well as accessing, retrieving and securing data and sustaining its integrity. I had only worked with Microsoft Access when creating databases in the past. I had heard of the term MySQL and knew it had something to do with RDBMSs but did not know exactly what. So this was where I first launched my investigation.

To begin with, I wanted to find out exactly what SQL stands for: ‘Structured Query Language’, said to be “the most popular computer language used to create, retrieve, update and delete…data from” (http://en.wikipedia.org/wiki/SQL) an RDBMS. I went on to find that MySQL is commonly used by “connecting to a MySQL server, choosing a database, and then using the SQL language to control the database.” (Andy Harris, PHP 5/MySQL programming for the absolute beginner, p.305) Immediately my knowledge of MySQL and its association with RDBMSs was clear: it is simply an RDBMS itself, allowing “many different tables to be joined together” (Michael K. Glass, Beginning PHP5, Apache, MySQL: web development, p.7) like any other RDBMS.
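To see what "create, retrieve, update and delete" and joining tables look like in practice, here is a small sketch. It uses Python's built-in sqlite3 module as a stand-in for a MySQL server, and the tables and rows are invented for illustration:

```python
import sqlite3

# In-memory database standing in for a MySQL server.
db = sqlite3.connect(":memory:")

# Create: two related tables, linked by a primary/foreign key pair.
db.execute("CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT)")
db.execute("""CREATE TABLE articles (
    id INTEGER PRIMARY KEY,
    author_id INTEGER REFERENCES authors(id),
    title TEXT)""")
db.execute("INSERT INTO authors VALUES (1, 'Alice')")
db.execute("INSERT INTO articles VALUES (1, 1, 'Why RSS matters')")

# Update: change a row in place.
db.execute("UPDATE articles SET title = 'Why RSS still matters' WHERE id = 1")

# Retrieve: join the two tables through the foreign key.
row = db.execute("""SELECT authors.name, articles.title
                    FROM articles JOIN authors
                    ON articles.author_id = authors.id""").fetchone()
print(row)  # ('Alice', 'Why RSS still matters')

# Delete: remove the row again.
db.execute("DELETE FROM articles WHERE id = 1")
```

With MySQL the only real difference at this level is the connection step (connecting to a server and choosing a database, as Harris describes); the SQL statements themselves are the shared language.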

As I was advised to investigate servers and then RDBMSs, I at first didn’t understand the connection between the two. However, via Beginning PHP5, Apache, MySQL: web development, p.7, I found that “MySQL is the database construct that enables PHP and Apache to work together to access and display data in a readable format to a browser.” I realised that when a user requests web data, PHP tells the server (i.e. Apache) and the server then gets the data from an RDBMS (i.e. MySQL). Now I could go on to investigate MySQL in further detail.

MySQL is an open-source RDBMS, which means for the user “lower cost & Total Cost of Ownership” (http://www.mysql.com/news-and-events/on-demand-webinars/embedding-oem-2005-09-22.php?gclid=CLSWi6Gb5YoCFQMrlAodGicIwQ), as well as the fact that users can “tailor it to their needs.” (http://www.google.co.uk/search?hl=en&q=define%3A+MySQL&meta) MySQL is apparently known for its “cross platform portability…superior performance, scalability and reliability…small footprint…[and]…ease of use.” (http://www.mysql.com/) This perfected statement comes, not surprisingly, from the MySQL website itself, so I went on to find out where the real glitches were, as there were bound to be some.

It was confirmed that MySQL “works on many operating systems and with many languages” (Andy Harris, PHP 5/MySQL programming for the absolute beginner, p.303) due to its default table format. MySQL can run on anything from Windows to UNIX systems; however, it works best on the latter. Furthermore, the same source confirms the ‘small footprint’ statement, as MySQL databases are “compact on disk and use less memory and CPU cycles.” MySQL can even be used on 64-bit processors due to its use “of 64[-bit] integers in the database.” (http://www.tometasoftware.com/MySQL-5-vs-Microsoft-SQL-Server-2005.asp) The praised performance is noticeably due to its compactness, sleekness and efficiency.

With regards to efficiency, MySQL works quickly and thoroughly “even with large data sets” (Andy Harris, PHP 5/MySQL programming for the absolute beginner, p.303), hence why it scales effortlessly to “large, query-heavy databases” (http://www.tometasoftware.com/MySQL) and is said to be built for arduous loads and dealing with “complex queries.” (Michael K. Glass, Beginning PHP5, Apache, MySQL: web development, p.7) MySQL is so far proving to be the “very powerful program” described by Andy Harris. The MySQL website seems not to be hiding any faults, as yet.

Moreover, I found that MySQL serves “core functionality…[required] at a very low cost.” (http://www.tometasoftware.com/MySQL) MySQL, being open source, is obviously free to download; however, there are costs when the GPL licence limitations are to be avoided. Yet this costs merely $400, which is nothing compared to the commercial DBMSs I will discuss in a short while. So far MySQL still seems to be meeting the standards first set.

What is more, with MySQL, replication is positively supported, as it can be done “easily and quickly” (http://www.tometasoftware.com/MySQL) to a number of slave machines. This suggests that even when the server fails, data is kept unharmed. However, finally, as I expected, I found out that MySQL does have its faults. Although data may be preserved intact, and MySQL features “password and user verification…for added security” (Michael K. Glass, Beginning PHP5, Apache, MySQL: web development, p.7), the same website above reveals that its basic table security support is restricted and it does not have “adequate security for government applications.” This clearly shows that MySQL cannot be used in government systems, hence why it is better suited to “lower-tier” applications (last two quotes: http://www.tometasoftware.com/MySQL) as opposed to enterprise-level ones.

Regardless of the intact data when the server shuts down, with MySQL, if there is an unforeseen power failure “data can be lost and the data store corrupted.” (http://www.tometasoftware.com/MySQL) Undeniably, MySQL’s recovery is very poor, which is clearly a concern, as its key job is to look after data. Another example of MySQL’s poor data management is that, although it may seem an advantage that data types are adaptable when they’re entered, “you can [also] enter dates that are not really days, such as February 30…[and]…store dates with missing information.” (Julie C. Meloni, Sams teach yourself PHP, MySQL and Apache, p.275) This obviously means that invalid data is a very real risk. This is ridiculous; MySQL could at least manage simple date validation.
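The contrast with strict date validation is easy to demonstrate. Python's standard datetime module, for instance, refuses the impossible date that MySQL reportedly accepts; this is only an illustration of the principle, not MySQL code:

```python
from datetime import date

def is_valid_date(year, month, day):
    """Return True only if the calendar actually contains this day."""
    try:
        date(year, month, day)  # raises ValueError for impossible dates
        return True
    except ValueError:
        return False

print(is_valid_date(2007, 2, 28))  # True
print(is_valid_date(2007, 2, 30))  # False: February has no 30th
```

A check this cheap at the point of entry is all it would take to keep ‘February 30’ out of a database.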

What’s more, this RDBMS may have “a number of utility programs” (Andy Harris, PHP 5/MySQL programming for the absolute beginner, p.304) and may have implemented some advanced new features (cursor support, stored procedures, triggers and foreign keys); however, these are still waiting to be “stabilized and rationalized.” (http://www.tometasoftware.com/MySQL) So although it serves “core functionality”, as earlier quoted, this does not mean that the operations are faultless. In fact, the features need to be restructured across a number of the MySQL suites, including “InnoDB, MyISAM, MaxDB and the new data clusters.” (http://www.tometasoftware.com/MySQL)

MySQL started off looking like RDBMS perfection; however, it now proves to have many imperfections, highlighting the statement that “MySQL is nowhere near the competitive enterprise field of the more established SQL server.” (http://www.tometasoftware.com/MySQL)

I then went on to find an alternative to this RDBMS, one that was not open source and so, I expected, would have far fewer faults, as money is now in the equation. I found via www.microsoft.com/sql/default.mspx that there is an RDBMS called Microsoft SQL Server. Apparently this SQL Server version (2005) provides a “business intelligence platform” designed for data absorption and examination to “make better decisions, faster.” In addition, this website also suggests that MS SQL runs “applications faster and more efficiently”, as well as providing “secure default setting[s]”. So far this does not seem to have that much over MySQL.

It is also stated (via http://www.microsoft.com/sql/prodinfo/overview/whats-new-in-sqlserver2005.mspx) that their SQL server has “reduced application downtime, increased scalability and performance, and tight yet flexible security controls”. As before, this faultless statement unsurprisingly comes from Microsoft themselves; I will venture into the hidden truths about this ‘perfect’ RDBMS and compare the two.

To begin with, the most obvious disadvantage, as already mentioned, is that this RDBMS is not free and only offers a “free license for development use only” (http://www.tometasoftware.com/MySQL). The actual cost of purchasing Micro SQL is a “whopping $1,400”; however, it is said to be worth this hefty price, as I am yet to find out.

Via http://www.hostmysite.com/support/sql/whatsnew I learned that, although Micro SQL is a commercial system, it does allow users, via the “SQL Management Tools”, to “customize and extend their management environment and [ISVs]” in order to create more tools and functions they may require. I first thought customisation was only available with open source products, yet this has clearly set my knowledge straight, and a noticeable advantage MySQL had over Micro SQL has in fact disappeared.

Unlike MySQL’s, Micro SQL’s advanced features have been fully implemented and have “long stabilized” (http://www.tometasoftware.com/MySQL). There is no reconstruction needed, which is why it sits at the “high-end of database systems” (http://www.tometasoftware.com/MySQL).

Not to get over-excited: I then found out that these wonderful, overwhelming features mean that the system is overall more intricate and therefore puts more strain on the “memory and hard disk storage…[resulting in] poorer performance” (http://www.tometasoftware.com/MySQL) than MySQL. To benefit from such a complex system, a large, powerful and dedicated hard drive is most definitely needed. This can be a deterrent, as it requires even more time and money on top of the purchase of the SQL server itself. However, one could argue that Micro SQL is not the problem here, as it is “limited only by hardware and application design” (SQL Server Technical Article, http://download.microsoft.com/download); it is not itself the limitation.

Micro SQL, much like MySQL, supports data replication. However, it does so in three distinct ways (“snapshot, transactional and merge”) (http://www.tometasoftware.com/MySQL): the snapshot method caters for static databases (those that hardly ever change); the transactional method caters for those that are constantly changing; and the merge method permits “simultaneous changes”. The above website also conveys that if changes clash, a “predefined conflict resolution algorithm” will solve the issue. Via www.google.com/search I learned that algorithms are used to “analyze data into components”: if there is data that does not match up, the algorithm will analyse it and correctly change it and/or put it into its appropriate component. This is a major advantage over MySQL’s single method.
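The idea of a predefined conflict-resolution rule can be sketched in a few lines of PHP. To be clear, this is not SQL Server’s actual algorithm; it is a toy “last writer wins” rule I have assumed purely for illustration, where the version of a row carrying the later modification timestamp is the one that survives:

```php
<?php
// Toy "last writer wins" conflict resolution: keep whichever version
// of the row carries the later modification timestamp.
function resolveConflict(array $a, array $b): array {
    return ($a['modified'] >= $b['modified']) ? $a : $b;
}

// Two conflicting copies of the same logical row (hypothetical data).
$serverCopy = ['id' => 7, 'name' => 'Alice',  'modified' => 1000];
$clientCopy = ['id' => 7, 'name' => 'Alicia', 'modified' => 1005];

$winner = resolveConflict($serverCopy, $clientCopy);
echo $winner['name'] . "\n"; // the later edit survives
?>
```

A real merge-replication system applies far more sophisticated, configurable rules, but the principle is the same: the clash is detected and resolved automatically rather than left for a human.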

The above also suggests that there is a low risk of unauthorised data exposure, unlike with MySQL. This is supported by the statement via www.tometasoftware.com that this SQL server provides complete “security at the column level.” The fact that MySQL only does so at the table level undeniably shows that it is a much less secure system. This explains why Micro SQL manages data security and swift recovery well, with less potential for data corruption.

On the subject of security, Micro SQL has earned C2 security certification, meaning it provides sufficient security for government applications and can therefore be used in government systems, not just small businesses. In fact, it is highly unlikely that small companies will purchase this system, given its heavy price tag, when free systems such as MySQL are available. This explains why Micro SQL is used for “large enterprise databases” (http://www.aspfree.com/c/b/MS-SQL-Server/) as opposed to the small-to-medium-scale databases typical of MySQL.

Admittedly, the MySQL server is said to provide ‘ease of use’, as stated at the beginning by its own website; however, Micro SQL can be integrated with “Microsoft Visual Basic .NET, and Microsoft Visual C# .NET” (http://www.hostmysite.com/support/sql/whatsnew), meaning that creating code is possible without the user having to know complex SQL elements. The integration of the .NET framework also delivers “security, scalability, and availability for your enterprise data and analytical applications” (http://www.re-invent.com/sqldatabasehosting/sql2005.aspx). Nevertheless, one must be trained in the “elaborate mechanisms” of this SQL server in order to replicate and transfer dynamic data.

Overall, it is clear that both RDBMSs have their pros and cons. Where MySQL has its cons, these are often excused by its unbeatable price tag and availability; where Micro SQL has its heavy price tag, this is excused by its overall better features, security, recovery and more intelligent platform. However, it must be admitted that overall Microsoft SQL is the “more secure, reliable, and productive platform for enterprise data and BI applications” (http://www.vmware.com/vmtn/appliances/directory/node/651) than the free MySQL. If a large business requires a robust, reliable and intelligent RDBMS then Microsoft SQL “wins hands down” (www.tometasoftware.com).


Tuesday, 27 February 2007

I really did enjoy indulging in my first learning log. I was already half way through when we were told of the changes...we were now being given specific titles and told it was to be 'non-technical'. I wasn't sure whether this was a good thing or a bad thing at first. I had already started looking into the structure of CMSs and blogs and the use of PHP within them, so I guess I was more disappointed by the changes; I was already touching on the more technical side!

I've found that I really do enjoy independent learning. I get a 'buzz' (if you can call it that) when I find connections between what I am experiencing/practising and what I have found out theory-wise through research. I'm looking forward to diving into my second learning log... (but I hope I can get away with sneaking in some technical learning! Shshshs!)

Monday, 26 February 2007

Both blogs and content management systems (CMS) “invite social interactivity”, and one is able to “leave comments, register as a user or…become a contributor” (www.unfoldingneurons.com); however, I want to investigate the difference between the two with regards to purpose and how they work. I will mainly focus on CMSs and will draw comparisons with blogs.

I logged on to www.blogger.com, which is obviously where I could create my own blog, and signed up as a user. I had to enter personal details into the form presented. It was that simple. I was then able to log on and enter my own content, via an administrative interface, which was then presented on the displayed web page that the rest of the world sees. Similarly, I logged on to www.myspace.com, which is said to be a CMS, and signed up as a user. Likewise I had to fill in a form with my personal details. Once this was done I was able to add more information, as I did in the blog, via the modules and “blocks [that] are added to build the website” (www.unfoldingneurons.com). Where CMSs have many modules and blocks already, blogs merely have “one module (which is the core)” and have the option to add more blocks such as widgets or plugins.

After adding content to my CMS via the administrative interface, I opened the source of the displayed profile page in an attempt to find the information I had input via the administrative page: the ‘About me’ section:

[SOURCE CODE via my appendix]

Although I did find it as expected, I was still rather confused as to how exactly these pages all interlink and work together, particularly how the administrative page works with the actual displayed profile page. Bearing this in mind, I decided to go back to the administrative page and viewed its source to search for the ‘about me’ section.

[SOURCE CODE via my appendix]

Both the source of the admin page and the source of the display page held my added content, and in both I could edit my content. I carried out the same test with my blog: I viewed the source of both the admin page and the display page. This time I edited my blog entry in both and saved it. When I opened it as an HTML page the content had changed.

[Print screens via my Appendix]

Regardless, what I realised was that although the displayed web page did change due to the source change, this page could only be viewed locally on my desktop. The changes were not actually made to the published web page; changes could only be made via the administrative interface.

Furthermore, when I examined the source of my blog’s administrative interface while writing a new entry, what I had written wasn’t appearing in the source. However, once I saved the entry and viewed the source again, whilst still in the admin page, it did appear!

Things were starting to become clearer; I could now make links between what I was experiencing and what I had found via my research. The diagram below, from http://www.steptwo.com.au/papers/kmc_what/index.html, shows the basic way in which a CMS (and a blog for that matter) works.

[Diagram Via my Appendix]

Clearly I have experienced that content creation and the presentation of my profile are at two ends of a spectrum, and the CMS manages the link between the two. “Once a page has been created, it is saved into a central repository in the CMS”. This explains why the blog entry only appeared in the source once I clicked ‘save’. This is also how the information that I type in via the admin interface is then loaded up for presentation, hence the ‘about me’ section appearing in both the source of the admin page AND the display page. “The CMS will build the site navigation for you, by reading the structure straight out of the content repository.” No HTML skills are required, as the admin interface automatically creates the code and implements your content, which is why, when I edit information via the admin interface, it automatically appears in the source once the changes have been saved. It is simply a user-friendly way of changing the source code non-technically.
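The save-then-publish behaviour I observed can be mimicked in a few lines of PHP. This is only a sketch of the principle, not the code MySpace or Blogger actually use: the ‘admin’ step writes content into a repository (here just a file), and the ‘display’ step builds the page by reading it back out, so the page always shows the last saved version, never a half-typed draft.

```php
<?php
$repository = sys_get_temp_dir() . '/cms_repository.txt';

// Admin side: nothing is visible until the content is saved to the repository.
function saveContent(string $repo, string $content): void {
    file_put_contents($repo, $content);
}

// Display side: the page is generated from the repository, which is why
// unsaved edits never appear in the display page's source.
function renderPage(string $repo): string {
    $content = file_exists($repo) ? file_get_contents($repo) : '';
    return "<div class='about-me'>" . htmlspecialchars($content) . "</div>";
}

saveContent($repository, 'I enjoy independent learning.');
echo renderPage($repository) . "\n";
?>
```

In a real CMS the repository is a database rather than a file, but the separation is the same: content in a central store, presentation generated from it on request.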

When I compared the two sources (admin vs. display page) there was a clear difference between them with regard to the ‘About me’ section in my CMS. The admin pages clearly used PHP (the presented page did not, in the ‘about me’ section):

This is clearly PHP, as recognised through the use of dollar signs: “the dollar sign in PHP is used to represent variables rather than money. \$5.00 could also be written as '$5'.” (http://www.webreference.com/programming/php/by_example/2.html) Clearly the highlighted coding conveys the location of the area of text titled ‘About me’; hence $Main$ProfileEditContent (etcetera).
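A minimal example of this dollar-sign syntax: in PHP, `$` introduces a variable name, variables interpolate inside double-quoted strings, and a literal dollar sign must be escaped. The variable names below are my own, not taken from the MySpace source:

```php
<?php
$sectionTitle = 'About me';   // $ marks a variable, not a currency amount
$price = 5;                   // to print a literal dollar sign, escape it: \$

echo "Section: $sectionTitle\n";   // prints: Section: About me
echo "Cost: \$$price.00\n";        // prints: Cost: $5.00
?>
```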

On the other hand, I could not detect any PHP in the source of the blog admin page, yet I could still notice a very apparent difference when I compared it to the source of the blog display page:

[Source Codes via my Appendix]

Clearly there is a huge difference between the code for the admin page and the code for the display page. The admin page consists of all the detailed HTML and JavaScript that determines what the display page will look like, and hence it also determines the source code of the display page. Additionally, I found out that once you embed PHP into an HTML script, save it and then “view the document source…the listing would look exactly like a normal HTML document” (Meloni, C. J., PHP, MySQL and Apache, 3rd ed., p.74). This undoubtedly explains why I can find hardly any PHP scripting.
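This can be demonstrated with a trivial page of my own. The PHP below is embedded in the HTML; when the server parses the file, only the resulting HTML is sent to the browser, so ‘View Source’ on the delivered page shows no trace of the PHP tags:

```php
<html>
  <body>
    <h1>My profile</h1>
    <!-- The PHP runs on the server; the visitor only ever sees its output. -->
    <p>About me: <?php echo 'I enjoy independent learning.'; ?></p>
  </body>
</html>
```

What the browser receives is plain HTML, `<p>About me: I enjoy independent learning.</p>`, which is exactly why searching a display page’s source for PHP turns up nothing.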

Additionally, within the source of the blog admin page (not the CMS) I also noticed the following line of code: _widgetManager._Registerwidget…

Undeniably, this brings more of my research to light: blogs do not have many modules and blocks like CMSs, but they can add more plugins or widgets, hence the above code. I have not yet found out which widget this refers to in my blog, as I am still navigating my way through the basic similarities and differences in the components of CMSs and blogs through my own exploration and research.

When I edited my profile on www.myspace.com I was able to add music, pictures and videos by planting the code into one of the blocks via the admin page (unlike in my blog, where I could only add text and pictures). The language used to create these interactive websites is PHP, which makes sense as it can be “embedded or combined with the HTML or a website” (W. Hugh and L. David, PHP and MySQL, p.16).

Therefore I searched the source of my CMS display page for the use of PHP, in order to see whether it had been used for any of the multimedia on my profile. The first instance I found was within a hyperlink reference:

http://www.urbnmix2.net/video.php?id=adam_sandler_grow_old_with_you

I copied this into a new browser window and found that it linked to a whole new website. The only PHP used referred to a number of these ‘urbnmix2.net/video’ links.

[Source code via Appendix]

I found these links within the comments added by other users on my profile. This makes sense, as this is where the videos were implemented. “PHP's ability includes outputting images, PDF files and even Flash movies” (http://uk.php.net/manual/en/intro-whatcando.php).

Clearly this also demonstrates what I have learned: that PHP is often used to incorporate “dynamic content derived from user input” (W. Hugh and L. David, PHP and MySQL, p.18). On top of this, the fact that I could find only the .php URLs and not the actual PHP coding for these elements proves what Larry Ullman states: “PHP scripts need to be parsed by the server…you absolutely must access PHP scripts via the URL. You cannot simply open them in your web browser.” (PHP and MySQL for Dynamic Web Sites, 2nd ed., p.5)

I then attempted to change the look of my ‘myspace’ profile by using a template. Again I simply copied the relevant code into one of the modules. This supports my research, which states that the majority of CMSs have a “main core…[where] these various ‘modules’ and ‘blocks’ are added…and then skinned by a theming / templating system.” (www.unfoldingneurons.com) This is clearly what allows the user to embed their contributed content into different templates, as the blocks/modules can be extracted and transferred, “leaving existing content and page architecture untouched… [as] the CMS will pull the content into the new look” (http://typo3.com/What_is_a_CMS.1351.0.html). On the other hand, I was unable to completely change the layout of my blog, as “generally speaking the components [structure] stays the same” and a blog “is usually a core defaulting to a certain layout” (http://www.unfoldingneurons.com/2007/cms-vs-blogno-you-dont-need-pepto-bismol); hence the one module that it consists of, as opposed to many.
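The ‘content untouched, new look pulled in’ idea can be sketched in PHP: the same stored content is pushed through two different templates, so changing the skin never alters the content itself. The `{title}`/`{body}` placeholder syntax here is my own invention for illustration, not the templating language any real CMS uses:

```php
<?php
// The stored content never changes; only the template (the "skin") does.
$content = ['title' => 'About me', 'body' => 'I enjoy independent learning.'];

// Fill a template's placeholders from the content repository.
function applyTemplate(string $template, array $content): string {
    return str_replace(
        ['{title}', '{body}'],
        [$content['title'], $content['body']],
        $template
    );
}

$plainSkin = "<h2>{title}</h2><p>{body}</p>";
$boxedSkin = "<div class='box'><strong>{title}</strong>: {body}</div>";

// Same content, two entirely different looks.
echo applyTemplate($plainSkin, $content) . "\n";
echo applyTemplate($boxedSkin, $content) . "\n";
?>
```

Swapping `$plainSkin` for `$boxedSkin` restyles the whole page while the `$content` array, the repository’s job, stays untouched.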

I’ve learned the basics of how blogs and CMSs are structured, both similarly and differently, and the basics of how they work using “content display sections” and modules via an “administrative interface” (http://www.unfoldingneurons.com/2007/cms-vs-blogno-you-dont-need-pepto-bismol). Blogs are merely a very simple version of a CMS, practically an element of one; hence a CMS can include a ‘blog’ along with plenty more elements, due to its more flexible foundation and its larger number of modules and blocks. Blogs “usually [have] one purpose”, to share information, whereas a CMS can have a number of purposes, from being a “community hub via forums" (http://www.unfoldingneurons.com/2007/cms-vs-blogno-you-dont-need-pepto-bismol) to selling products.

Where I came across some PHP, I have also learned some basic PHP language, as demonstrated in my appendix, and how it is embedded into HTML but cannot be viewed via the document source.

In this learning log I have touched on a number of factors relating to CMSs and blogs, from simply changing content via the administrative interface to investigating the basics of the language used to create them: PHP. As this is my first learning log, I have only skimmed the surface of these areas, as I knew little about the whole subject. This has given me a basic foundation from which I can home in on one of these areas and really deepen my knowledge on that specific matter later on.

I have found that I took the most interest in the PHP language and how it is used within HTML pages yet hidden in the document source. Although this was probably the most technical and complex finding of this learning log, PHP for me is the most intriguing aspect of my findings, and I plan to devote a learning log just to this later on.

Thursday, 22 February 2007

Both blogs and content management systems (CMS) “invite social interactivity”, and one is able to “leave comments, register as a user or…become a contributor” (www.unfoldingneurons.com); however, I want to investigate the difference between the two with regards to purpose and how they work. I will mainly focus on CMSs and will draw comparisons with blogs.

I logged on to www.wordpress.com, which is a blog host, and signed up as a user. I had to enter personal details into the form presented. It was that simple. I was then able to log on and enter my own content, via an administrative interface, which was then displayed on the web page that the rest of the world sees. Similarly, I logged on to www.myspace.com, which is said to be a CMS, and signed up as a user. Likewise I had to fill in a form with my personal details. Once this was done I was able to add more information, as I did in the blog, via the modules and “blocks [that] are added to build the website” (www.unfoldingneurons.com). Where CMSs have many modules and blocks already, blogs merely have “one module (which is the core)” and have the option to add more blocks such as widgets or plugins.

After adding content to my CMS via the administrative interface, I opened the source of the displayed profile page in an attempt to find the information I had input via the administrative page, and to try to change the ‘About me’ section:

It did change as intended on my displayed profile. At this point I became rather confused as to how exactly these pages all interlink and work together, particularly how the administrative page works with the actual displayed profile page. Bearing this in mind, I decided to go back to the administrative page and viewed its source to search for the ‘about me’ section. Here, as with the source code for the actual displayed profile page and the administrative interface page, I could edit the information. Both the source of the admin page and the source of the display page held my added content, and in both I could edit it. Things were starting to become clearer; I could now make links between what I was experiencing and what I had found via my research. The diagram below, from http://www.steptwo.com.au/papers/kmc_what/index.html, shows the basic way in which a CMS (and a blog for that matter) works.

Clearly I have experienced that content creation and the presentation of my profile are at two ends of a spectrum, and the CMS manages the link between the two. “Once a page has been created, it is saved into a central repository in the CMS”. This is how the information that I type in via the admin interface is then loaded up for presentation, hence the ‘about me’ section appearing in both the source of the admin page AND the display page. “The CMS will build the site navigation for you, by reading the structure straight out of the content repository.” No HTML skills are required, as the admin interface automatically creates the code and implements your content, which is why, when I edit information via the admin interface, it automatically appears in the source. It is simply a user-friendly way of changing the source code non-technically.

When I edited my profile on www.myspace.com I was able to add music, pictures and videos by planting the code into one of the blocks via the admin page. The language used to create these interactive websites is PHP, which makes sense as it can be “embedded or combined with the HTML or a website” (W. Hugh and L. David, PHP and MySQL, p.16).

Therefore I searched the source of my CMS display page for the use of PHP, in order to see whether it had been used for any of the multimedia on my profile. The first instance I found was within a hyperlink reference:

I copied this into a new browser window and found that it linked to a whole new website. The only PHP used referred to a number of these ‘urbnmix2.net/video’ links.

I found these links within the comments added by other users on my profile. It makes sense that this is the only place where PHP was used, as this is where the videos were implemented. “PHP's ability includes outputting images, PDF files and even Flash movies” (http://uk.php.net/manual/en/intro-whatcando.php, 2007).

Clearly this demonstrates what I have learned: that PHP is often used within CMSs and blogs to incorporate “dynamic content derived from user input” (W. Hugh and L. David, PHP and MySQL, p.18).

I then attempted to change the look of my ‘myspace’ profile by using a template. Again I simply copied the relevant code into one of the modules. This supports my research, which states that the majority of CMSs have a “main core…[where] these various ‘modules’ and ‘blocks’ are added…and then skinned by a theming / templating system.” (www.unfoldingneurons.com) This is clearly what allows the user to embed their contributed content into different templates, as the blocks/modules can be extracted and transferred, “leaving existing content and page architecture untouched… [as] the CMS will pull the content into the new look” (http://typo3.com/What_is_a_CMS.1351.0.html). On the other hand, I was unable to completely change the layout of my blog, as “generally speaking the components [structure] stays the same” and a blog “is usually a core defaulting to a certain layout” (http://www.unfoldingneurons.com/2007/cms-vs-blogno-you-dont-need-pepto-bismol); hence the one module that it consists of, as opposed to many.

I’ve learned the basics of how blogs and CMSs are structured, both similarly and differently, and the basics of how they work using “content display sections” and modules via an “administrative interface” (http://www.unfoldingneurons.com/2007/cms-vs-blogno-you-dont-need-pepto-bismol). Blogs are merely a very simple version of a CMS, practically an element of one; hence a CMS can include a ‘blog’ along with plenty more elements, due to its more flexible foundation and its larger number of modules and blocks. Blogs “usually [have] one purpose”, to share information, whereas a CMS can have a number of purposes, including selling products.

Where I came across some PHP, I have also learned some basic PHP language, as demonstrated in my appendix.