Monday, April 30, 2012

Ponders the fact that all the computers in a massively multi-player game are effectively one of the largest supercomputers in the world; wonders how much work could be offloaded from the servers to the clients. Make all the clients use a BitTorrent-like protocol to download updates and textures between machines on nearby networks.

Sunday, April 29, 2012

I've done a lot of data processing over the years and have come to the following understanding of how data processing works at a general level. This concept is what I am planning on using for several batch and message processing projects I wish to create.

Data level

Data can come from many sources. The program has to open a file, a database connection, a serial port, a network port, or other device and begin reading in a stream of data. At this level the data is an almost meaningless stream of single bytes.

Format level

These bytes are organized in a specific pattern known as a format. There are many different formats that the data can be organized around.

Fixed length. The fields are in a strict order, each with a fixed length, so that each record you read will be the sum of those fixed fields. There may be a special byte with end-of-record significance, typically a newline or a carriage return, but with this format the record separator is optional. This is how the IP and TCP headers arrive in a data packet at layers 3 and 4: each byte, and even each bit, can have a specific positional meaning. You can often spot this format by setting a text editor to 80 columns and suddenly seeing the beginnings of last names line up right down the page at column 20, each name padded with spaces until another column of data starts, all lined up at column 32.
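As a rough sketch of reading this format, the Python below slices each 80-byte record into fields; the field names and column offsets are hypothetical, chosen only to echo the example above.

RECORD_LENGTH = 80
# Hypothetical layout: (start, end) column offsets within each record.
FIELDS = {
    "first_name": (0, 20),
    "last_name": (20, 32),
    "city": (32, 60),
    "zip": (60, 65),
}

def parse_fixed(stream):
    """Yield one dict per fixed-length record read from a binary stream."""
    while True:
        raw = stream.read(RECORD_LENGTH)
        if len(raw) < RECORD_LENGTH:
            break  # end of data, or a short final record
        text = raw.decode("ascii")
        yield {name: text[start:end].rstrip() for name, (start, end) in FIELDS.items()}

Called as parse_fixed(open("people.dat", "rb")), it turns the positional bytes into named fields without needing any record separator.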

Delimited. Each field is followed by a delimiter, typically a comma or a tab, and an end-of-record marker separates the records from each other, typically a newline or a carriage return as with the fixed-length format above. The fields usually still have a maximum length, or a range of values, but this is not visible from the format itself. You can spot this format by seeing the commas or tabs in the data; every record will usually contain the same count of commas or tabs.
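The delimited case reads naturally with Python's csv module; here the tab delimiter and the field names are assumptions for illustration.

import csv

FIELDNAMES = ["first_name", "last_name", "city", "zip"]  # hypothetical fields

def parse_delimited(path):
    """Yield one dict per tab-delimited record."""
    with open(path, newline="") as f:
        for row in csv.DictReader(f, fieldnames=FIELDNAMES, delimiter="\t"):
            # A None value here means the record had fewer delimiters than expected.
            yield row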

Mixed. A message, or record, can be a combination of the above. The fields can mostly be delimited with commas or tabs, but a few fields can have contents with a fixed internal layout. HL7 is an example of a mixed format.

Grammar. This used to be much more difficult than it is now; typically it just means XML these days. In the past people would create many different formats for data that was contextual in nature. If you are trying to parse text that comes from a command line, or a language like English, or a program file written in C or Java, then your parser has to understand that combination of positional text whose meaning is determined by the initial state and the order of the commands.

Conceptual Layer

At this point you have read in the stream of bytes, given groups of those bytes meaning and stored the data into a record or other data object in your program. A reference to this data can be passed around to represent that stored set of meaning.

Translation and Routing

Often the data you received has fields in the wrong order, or has a field holding a number from 1-5 that actually represents a user name. This layer takes the incoming data and creates a new record in the new format, transferring and transforming the data from one data object to the other. Or an XML file you parsed has 100 records that need to be pulled out of the object and sent on to the next layer as 100 individual records, so this layer has a loop that lets you read one data value and create as many objects as you need. A single message might be split into multiple outbound messages.
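A small sketch of this layer in Python: it pulls individual records out of one parsed XML message and emits a translated record for each. The element names and the 1-5 code mapping are hypothetical.

import xml.etree.ElementTree as ET

# Hypothetical mapping of an incoming 1-5 code to a user name.
USER_CODES = {1: "admin", 2: "billing", 3: "nursing", 4: "lab", 5: "pharmacy"}

def translate(xml_text):
    """Split one inbound XML message into many outbound records,
    reordering and transforming fields along the way."""
    root = ET.fromstring(xml_text)
    for rec in root.findall("record"):      # one inbound message...
        yield {                              # ...many outbound records
            "user": USER_CODES[int(rec.findtext("user_code"))],
            "last_name": rec.findtext("name/last"),
            "first_name": rec.findtext("name/first"),
        }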

Data Store

The data coming out of the translation layer will need to be mapped to a set of outbound data objects. One stream might go in one direction, while another set of messages goes to another table. This operation has to be tied to a database transaction, so that either all the data is applied to the database or none of it is. Alternatively, you can have an exception log that others have to check and correct later.
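A minimal sketch of that all-or-nothing rule using Python's sqlite3 module; the table and column names are made up. The connection used as a context manager commits the batch if every insert succeeds and rolls everything back if any insert fails.

import sqlite3

def store(records, db_path="warehouse.db"):
    """Apply a whole batch inside one transaction: all rows land, or none do."""
    conn = sqlite3.connect(db_path)
    try:
        with conn:  # commits on success, rolls back on any exception
            for rec in records:
                conn.execute(
                    "INSERT INTO patients (user, last_name, first_name) VALUES (?, ?, ?)",
                    (rec["user"], rec["last_name"], rec["first_name"]),
                )
    except sqlite3.Error as exc:
        # Nothing was applied; record the failure for someone to check and correct later.
        print("batch rejected:", exc)
    finally:
        conn.close()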

Data View

In order to see this data you can map a view onto one or more data objects and see records in the data view. The data view can represent the underlying data objects in many ways. It also only has to retrieve what it needs to fill the current set of records in the view, so a data view over a million-record database might only have to load the first 10 records. This data view could even be aliased across to another computer and still only has to cache a little data to represent many records.
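A rough sketch of such a view; the table name and page size are assumptions. Only the rows currently on screen are ever fetched, however large the underlying table is.

import sqlite3

class DataView:
    """A window onto a large table that loads only the rows currently displayed."""
    def __init__(self, conn, page_size=10):
        self.conn = conn
        self.page_size = page_size

    def page(self, number=0):
        offset = number * self.page_size
        cur = self.conn.execute(
            "SELECT user, last_name, first_name FROM patients LIMIT ? OFFSET ?",
            (self.page_size, offset),
        )
        return cur.fetchall()  # e.g. the first 10 rows of a million-row table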

In order for two computer systems to transfer data between them, the information has to be wrapped up at each layer and transmitted across the physical wire or with radio waves. On the receiving side each layer is unwrapped in turn so that the correct process gets the right information into the correct place.

There is no guarantee that any packet will get across the physical media. So at level 4 and above you can manage the transport layer to ensure that the receiving computer has gotten all the information you are trying to send, or at least you will know the data was not received.

Most applications will also send acknowledgements across the connection so that the application can be sure that the data has been placed into a database on the other side of the connection.

There are often many ways to look at how to build and manage the project files, as well as the output from the project: the intermediate and release files. It is better to keep the intermediates out of the source tree, to keep it clean and small. You can accidentally check in files that should not be included if you are not careful; best to keep them out of the tree altogether.
Then there is the actual output from your work: the libraries, modules and programs you wish to release. If these are in their own folder, organized how they need to be, then you can just zip or tar.bz2 them up easily. You can even add a target level at that point and use a cross compiler to release for many platforms in one go.
You may also need to add in a special build for code that uses the compiler to build a special version of
There are three levels of indirection that apply to the hard-set calculations of where the path goes.
Level 1: where does the project go when you save out a new one you create?
Level 2: where is the project file in relation to the root of the project? I.e., where is the source code?
Level 3: where do my intermediates and builds go in relation to the project root?
/Builds
    /Targets
        /Debug
        /Release
        /Clang
/Intermediates
    /Targets
        /Debug
        /Release
        /Clang
/MainProject-SDK
/LoadableModules/ProjectRoot
    /platform
        /os
            /buildsystem
                /target
                    /ProjectFile
Where target is removed if a certain os only has a single target ever.

Saturday, April 28, 2012

My boss came to me and
said "Jim, I need a fax machine in our new out-of-state office that can
refax the orders to us here at the main office."
I promised that I would look into the possibilities
immediately. I started by calling my main parts vendor but was unable
to find anything at the time for either Windows 95 or Windows NT that was
a multi-line fax server and could also refax the received faxes to our
main office.
I had been using a small utility with my home
Linux system and had configured efax to work with one modem line.
Sure enough the main controlling program was just a shell script and was
extremely well documented with comments. In fact there were more
lines of comments than of code.
After messing around that night faxing things
back and forth from my home and the office I decided that efax could do
the job. I went to my boss and told him that there were no commercial
products that would do what he wanted, but that I had found a solution
that would work. What he wanted to do was a minor problem with UNIX
because controlling banks of serial devices is one of the reasons that
UNIX was written by AT&T in the first place.
He was hesitant to go with a UNIX solution
until I assured him that a company called Caldera was working with Novell
to provide a UNIX operating system that would run on regular IBM compatible
hardware and be interoperate with our network. He gave me the go
ahead.
Since the system was going to be in another
state I decided to go with new, powerful equipment so as to have as few
problems as possible. I got the quotes that I needed and ordered a copy
of Caldera 1.0, 100 MHz Pentium, 16 MB RAM, 1.2 GB HD, mini tower
case, NE2000 combo network card, 4x EIDE CD-ROM drive, 8 port ISA Cyclades
serial board with octopus cables, 4 USR Sportster 14.4 external modems,
and an HP 5 laser printer. The grand total came up to about $2000.00.
All the parts came in by the second day and
I assembled the hardware in a few minutes. I then placed the Caldera
CD-ROM in the player and the boot disk in the floppy drive and quickly
ran through the setup to have an operational OS in less than 10 minutes.
This was the first time that I had ever used Redhat and I haven't used
anything else since then for any full installs.
The system didn't recognize the Cyclades serial
card. No problem, I was an old hand at reconfiguring and installing
the kernel. It still didn't recognize the Cyclades serial card.
I double-checked everything that I had done and it was correct. I called
Cyclades technical support and they knew exactly what was wrong and directed
me to a patch on the internet. I got the patch and had full instructions.
I applied the patch, setup the /etc/rc.d/rc.serial file to recognize the
new ports, created the new devices in the /dev directory with a supplied
script and remade the kernel.
Even with these problems I had the entire system operational
inside of two hours. The install even included the Samba print and file
server and the Apache web server, and they were running with no configuration on
my part.
Now came the tough part. I copied the
/usr/bin/fax script to /usr/bin/faxa. Then I modified the /usr/bin/faxa
script as follows:
I changed the
DEV=modem
to
DEV=ttyS0
I changed the
NAME=""
to
NAME="name of our company"
I corrected the phone number to report the phone number that it was
going to answer at. I also fixed the log name and received fax names
to give each a unique name by prepending an 'a' to these names.
Then I made sure that /usr/bin/faxa worked
in that it would answer the line, receive the fax, print the fax locally,
retransmit the fax to our bank of fax machines in the office and then move
this file to a done directory.
Finally I copied /usr/bin/faxa to /usr/bin/faxb,
/usr/bin/faxc and /usr/bin/faxd and corrected each of these new files so
that DEV= ttyS1, ttyS2 and ttyS3 respectively. They each prepended the log file name and received fax file names with 'b', 'c' and
'd' respectively.
I spent the next two days testing the system
to ensure that it would work without any hardware or software problems.
The following day I made the 3 hour drive and installed the system.
I had to modify the print portion of the /usr/bin/fax[a|b|c|d] scripts
in order to fix a glitch in the information that was being sent to the
system from some old fax machines. The efax machine was rescaling
the faxes so that one page faxed would fit on one printed page.
We were getting a postage stamp printed page in the upper left corner with
a long thin horizontal line clear across the page. I modified the
faxa, faxb, faxc and faxd scripts to trim the page to 8.5x11 and cleared
up the problem.
In less than a week I had researched, developed
and implemented a multi-line fax server. I never saw the utilization
go below 85% on that box and there was always plenty of free memory.
And the thing that amazed me is that we were only scratching the surface
of the power that having a Linux system could provide.
The system worked with only a few minor user
glitches during the next nine months that I worked at that company.
Once the phone lines became messed up. The users forgot to put paper
in the printer upon occasion or didn't change the toner until two days
after the printer ran out. "Oh, you mean it's _supposed_ to have
stuff printed on the page?"
Even a completely unexpected power down would
only cause the system to come back online, reprint and refax what it hadn't
moved to the done directory, and then continue on answering the fax lines.
Please note that this is the default behavior of the operating system and
efax, I didn't have to do anything special to get this robustness.
Next I'll tell you how I automated putting the
received faxes onto a web server.

Nmap, Wireshark, and other programs use Lua as a scripting language to let users extend the functionality of the program. It seems that Lua is very popular in security programs, about as popular as Tcl is in the hospital environment.

The Nmap Scripting Engine (NSE) is one of Nmap's most
powerful and flexible features. It allows users to write (and
share) simple scripts to automate a wide variety of networking
tasks. Those scripts are then executed in parallel with the speed
and efficiency you expect from Nmap. Users can rely on the
growing and diverse set of scripts distributed with Nmap, or write
their own to meet custom needs.

In the end, Lua excelled in all of our criteria.
It is small, distributed under the liberal MIT open source license, has
coroutines for efficient parallel script
execution, was designed with embeddability in mind, has
excellent documentation, and is actively developed by a large
and committed community.
Lua is now even embedded in other popular open source security tools including
the Wireshark sniffer and Snort IDS.

Learning the Lua Language

From the link:
This tutorial is aimed at all newcomers to the language Lua. We start
off with where to find relevant introductory material and then progress
to using the language with tutorials in the TutorialDirectory. The style is directed at newcomers to scripting languages, as well as newcomers to Lua. Common uses of Lua are:

Clang Static Analyzer

Currently it can be run either as a standalone tool or within Xcode. The standalone tool is invoked from the command-line, and is intended to be run in tandem with a build of a codebase.

The analyzer is 100% open source and is part of the Clang project. Like the rest of Clang, the analyzer is implemented as a C++ library that can be used by other tools and applications.

What is Static Analysis?

The term "static analysis" is conflated, but here we use it to mean a collection of algorithms and techniques used to analyze source code in order to automatically find bugs. The idea is similar in spirit to compiler warnings (which can be useful for finding coding errors) but to take that idea a step further and find bugs that are traditionally found using run-time debugging techniques such as testing.

Static analysis bug-finding tools have evolved over the last several decades from basic syntactic checkers to those that find deep bugs by reasoning about the semantics of code. The goal of the Clang Static Analyzer is to provide an industrial-quality static analysis framework for analyzing C and Objective-C programs that is freely available, extensible, and has a high quality of implementation.

This means that this is an open source program that is designed to analyze code without running the code. Clang reads the source code of the program you are working with in order to tell you what problems the rule set finds in the code.

Several open source projects are interested in using Clang to make their code base better. In my opinion, Clang is only part of an overall solution that includes unit testing of interfaces combined with high level integration testing of the overall released project. But it is better than no testing at all.

Since the primary target for Clang is OS X, I had to download and follow the instructions on the Clang site. For other platforms, such as my Ubuntu system, the web site led me to this page: http://clang.llvm.org/get_started.html#build

I followed those directions and got an executable to begin testing against source code after just a few hours.

I cleared out all the .o files instead of just nmap's .o files and found these warnings:

linear.cpp:1092:9: warning: ‘loss_old’ may be used uninitialized in this function
linear.cpp:1090:9: warning: ‘Gnorm1_init’ may be used uninitialized in this function
linear.cpp:1376:9: warning: ‘Gnorm1_init’ may be used uninitialized in this function
linear.cpp:1805:15: warning: Call to 'malloc' has an allocation size of 0 bytes
int *start = Malloc(int,nr_class);
^~~~~~~~~~~~~~~~~~~~
linear.cpp:21:32: note: expanded from macro 'Malloc'
#define Malloc(type,n) (type *)malloc((n)*sizeof(type))
^ ~~~~~~~~~~~~~~~~
linear.cpp:2000:30: warning: Assigned value is garbage or undefined
model_->w[j*nr_class+i] = w[j];
^ ~~~~

This one actually seems more serious, if true.

Evidently nr_class is 0 there, which causes malloc to allocate zero bytes to "start".

I set the -k option to keep going, and am letting everything run for as long as it needs to run.

About 6 hours later I checked and it had finished up. The reports are very complete:

This is an example of one of the errors, but as you can see the NULL does appear to be checked just beforehand. The documentation talks about using asserts to remove some of these errors in a debug build.

Overall this does look like an interesting tool to use in addition to other testing tools.

Because the reports trace the paths through the failed branches, the analyzer makes a copy of the source file for each bug report.

The compressed size of the reports was 3.7 MB, and the uncompressed size was 47 MB.

And nowhere could I find an explanation of what the fields represent. I know a little about TCP/IP, so I know that ttl stands for time to live, iplen is the length of the IP packet, and sequence is the sequence number assigned to every packet by its sender. However I am not positive what is meant by id or win, or what the letters 'R', 'S', and 'A' represent.

If you are going to be doing much development on the software then you need to download the software and begin reading the code.

I actually just downloaded the source tarballs for the current and development versions and did the ./configure; make on them and they compiled just fine. The tarballs for source and compiled versions are available here: http://nmap.org/download.html (the source is the second section down).

If you are not part of the core team then you can't check changes back into the main branch. The changes you make will just be for your own use. If you would like you could post diffs to the main dev mailing lists for discussion and inclusion to the main code base. This nmap dev mailing list archive is here: http://seclists.org/nmap-dev/ And you can subscribe to the list here:

And even if you were part of the core team it would probably be bad to check things in directly to the main branch without extensive testing and having things reviewed by others.

I am still in the process of creating a branch to work in within the main repository. If I remember correctly, creating a branch in svn is the same as making a low-cost copy internally to a new location inside the svn database, which should be a command similar to this:

http://svnbook.red-bean.com/en/1.0/re07.html

svn copy SRC DST

Still working out exactly what the SRC and DST parts will be. I'm thinking it will be this:

I had a little scare when I created the svn directory: it defaulted to the user name on my system. I hit return and it then asked for a username and password, which it seems to have cached for that host, which is nice. I also had to install autoconf to get the compile to work.

These values form a unique combination that exactly matches this and only this connection on the entire Internet.

Known ports are considered to be those ports where a server can accept multiple connections from many clients. Telnet has a known port of 23. Any server which accepts multiple connections on a single port is considered to have a known port, because this is the port that is known by all of its clients.

Note that even though the telnet server has accepted two connections on port 23, each client was randomly given a different port number, and this slight difference is all that is needed to uniquely differentiate the two connections from each other.
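A small Python sketch of the same idea: both clients connect to one known port, and each connection is told apart by the client's randomly assigned ephemeral port. The port number 2323 stands in for telnet's well-known port 23.

import socket, threading

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 2323))   # the "known" port that all clients use
srv.listen(5)

def accept_two():
    for _ in range(2):
        conn, addr = srv.accept()
        # addr is (client_ip, ephemeral_port); only the port differs between the two
        print("server sees connection from", addr)
        conn.close()

t = threading.Thread(target=accept_two)
t.start()
for _ in range(2):
    c = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    c.connect(("127.0.0.1", 2323))
    print("client local address", c.getsockname())  # OS picks a fresh ephemeral port
    c.close()
t.join()
srv.close()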

This is not to be confused with the /etc/services file, which reflects the ports assigned by the Internet Assigned Numbers Authority. Many of these ports are known ports, but that is because they accept multiple connections on a single port, not because they are in the services file.

Cloverleaf doesn't use a known port to accept connections. Each process that wants to connect to a tcp/ip port on Cloverleaf will get its own port. Adding to the complexity is the fact that Cloverleaf has production, test and training environments. The number of interfaces that we have will only grow with time. In order to manage this complexity we need the flexibility to assign port numbers on the Cloverleaf servers in a logical manner that is maintainable and ensures that we can quickly and easily troubleshoot any networking problems.

Ports are only bound to a particular socket on the server. So it is perfectly acceptable for a port to be used for one purpose on an application server and for an entirely different purpose on another server, such as the interface server. In fact, restricting the use of a port on a machine for a service that the machine will never provide is counterproductive; in just a few years we would run out of blocks of numbers that we are allowed to use. Such a network-wide restriction on ports would not be enforceable and would not be maintainable.

We are perfectly willing to fully publish our entire port number specification as a network-reachable document on the Novell server.

Thursday, April 26, 2012

There
are thousands of mythologies that explain how life was created and got
onto the earth. Just about every group of people that ever lived made up
some explanation about the creation of life. Because their memory of
history only extended back to the memory of the oldest living person in
the tribe, they had no perspective on how the Earth had changed on
vastly longer time scales. Memory can only last as long as someone
remembers the story. And as anyone who has ever played a game of
telephone knows, the story grows in the telling. The big points stay similar,
but the details are filled in.

These
creation stories were comforting for the people who shared them. But as
the dark ages ended Europeans began to chart the course of the stars in
the heavens, systematically dig into the earth, began to explore the
world on ships, and most importantly of all, began to methodically
record the information they were finding, and sharing this information
with others. The information they were finding didn't match the creation
myths to this point. Instead of the Earth being thousands of years old,
life was millions of years old, and then billions of years old. Instead
of the Earth being the center of the universe, the Earth circled around
the Sun, and it wasn't even the biggest planet.

This
new evidence completely disproved all the ancient myths. The old ways
of thinking were so entrenched that anyone who disagreed with them was
stoned or burned to death. A new more systematic way of asking questions
about everything was developed called “The Scientific Method.” This
allowed people to ask questions about the universe and experiment to see
if the question was true or not. Instead of taking things on faith this
new method required people to repeat the experiment before they
accepted the theory as true. And new theories could be presented to
refine or replace older theories, without anyone being burned to death.

Many
theories were proposed that explained bits and pieces of the theory of
life. One theory took precedence over all the rest. A man named Darwin
took a voyage on a ship and studied a group of islands with an amazing
diversity of life. He noticed many patterns and over the course of years
he worked on a book. Another man named Wallace independently came up
with nearly identical theories to Darwin. So Darwin was forced to
abandon work on his masterwork and immediately publish a shorter book
called “On the Origin of Species.”

It
took decades of experiments to prove this controversial work to be
correct, through a combination of studying fossils and studying living
species. Over the years since then the theory has been refined, but
never disproven.

The
way evolution works is that living populations have environmental
pressures. Their offspring will either be fit to survive in that
environment or not. Because there are limited resources the offspring
that can get the most resources and have the most offspring will have
the most descendants, crowding out the less fit. This pressure from
other species and your own species is a form of environmental pressure
as well.

It
is important to realize that individuals in a population do not evolve.
The offspring of a pairing can have a lot of different combinations of
genes, which make them more or less fit. Sometimes there is an error
when the DNA is duplicated. Most of the time this causes the offspring
to be completely unfit. But the mutation can result in an individual
that is more fit for the environment it finds itself in.

The pressures of all the species in an area all evolving together are called
macro-evolution. When one species becomes more fit for the environment
it puts more pressure on the other species competing for the same
resources.

A
new species forms when individuals from the same species stop breeding
together, either because of behavior changes, or because of geographic
separation. At some point in the future the small changes accumulate or
the number of genes changes and makes it impossible for the individuals
to breed together anymore.

We
talked about theories about how life evolves, but where did life
originally come from? Nobody knows for sure, and it is impossible to
prove one way or another. The most popular theory is that the early
atmosphere rained out organic compounds into the early seas and ponds.
Then primitive replicating molecules began duplicating. Eventually a
cell wall was formed by mistake; this became the primary cell that
outcompeted all the existing self-replicating molecules.

This
cell replicated itself and mutations caused it to fit into every
available ecological niche. Cells began to invade other cells in
parasitic relationships. Eventually these relationships became symbiotic
and beneficial to both cells. This happened several times; once with
mitochondria, a second time with the nucleus, and a
third time with chloroplasts. This increase in complexity allowed the
development of multi-cellular life. This multi-cellular life got washed
up onto land during high tide, became adapted for land bit by bit, and
then spread from the shore inland.

If
you look at all the stellar systems in our galaxy, and all the galaxies
in the sky, the odds that life evolved on more than just one planet are
almost a certainty. Even if you say that only 1 in a million suns has a
planet in the right place, and 1 in a million of those planets develop
life, that still leaves millions of places that life could develop. And
life may just be completely adaptable so that it can form in conditions
well beyond what we expect, including in interstellar space far from any
sun. We have found life on Earth in ice fields, miles deep in the
earth, miles deep under the ocean around volcanic vents, in hot springs,
and even in nuclear reactors. It may be possible that the first life on
Earth came from the comets that rained down onto the forming planet.

The
difference between the wise man and the fool is not that the wise man
succeeds at everything on the first try. The difference is that the
wise man only sees someone fail that particular way one time.

As
a computer programmer I was often called upon to code solutions to
problems that nobody at that company had dealt with before. Failure had
to be factored into every project, and a solution to each failure found
before the scheduled deadline. You couldn’t just look up the answer and
implement it. You first had to figure out exactly what the question
was, because often the person asking for something wasn’t very specific
about what they wanted. This was called a functional specification and
is a negotiation between everything that someone could possibly want,
and what was technically feasible to implement in the limited time with
the limited resources available.

Only
once you knew exactly what someone was asking for could you write up a
technical specification about how you would implement a solution given
the limitations of your computing environment. After 3-4 weeks you
would have a good idea of the question and a solution to the problem and
it was time to attempt to implement the solution.

Often
this is where programmers like me would come into the project. We
would be handed the functional and technical specifications and told we
had 4 weeks to implement the code. We were told that this would be our
number one priority, along with the dozen other number one priorities we
also had at the same time. Often
I would bring up the project at team meetings and get input from
everyone there about how they would like the project to be implemented.
I would give my ideas on their projects as well. Once I had an idea of
how to implement the code I would break the code up into interfaces and
code each piece. If more than one programmer was working on the
project we would each work behind one of these interfaces and if we did
our piece correctly the code would just match up and run in a couple of
weeks.

Now,
we programmers expect to write the code completely wrong, so at each
interface we would first spend a couple of hours writing the interface
and a test harness to test the code. The more time we put into this,
the more robust the interface would be later. Then
we would implement the code behind the interface, in a fill in the
blank way. It was a cycle. Write code, test it, see how complete and
functional it is, repeat. At this point it is common to run into
problems, figure out a solution, and have to go back and amend the
technical and functional specifications. Sometimes the project is made
longer to fix the problem. Sometimes the project is broken into a phase
I and II. Sometimes everything goes right and you make the deadline
with functional code that actually works.

You
then go into testing. Quality assurance gets the code and the
specifications and tests everything. We developers would get bug
reports and fix the problems, rolling out new releases to test. At
the end of the project we always won, despite having numerous failures
along the way. Any system that doesn’t implement feedback to correct
failure is a broken system. Failure is part and parcel of life. The
only failure that counts is the failure that isn’t corrected.

Wednesday, April 25, 2012

One of the things I had to do with the netbook when I installed Easy Peasy Ubuntu onto it was to create a startup disk that ran from a USB port. Here is a tutorial on how to create a boot thumb drive from nearly any Linux install disk.

The reason I am doing this is to be able to camp and travel in comfort and style. If I get a short term job some place I want to be able to come in and work right away. I may also homestead a small plot of land with the trailer.

I wanted the trailer to be just wide enough so that a full-sized futon would fit in the front of the van. It should have enough room to store clothing for two people, have a shower, a toilet and a kitchen. It should provide enough power to heat the water, heat the interior, and power a fridge, LED interior lights, a small TV and a laptop computer. It should also be able to charge up a battery to run a drill or a cutter.

[gallery]

I had never built anything of this scale before. So everything was new to me.

At first I was aiming to build a 2000 pound trailer, but the cost for the 3500 pound parts was just a little more, and even if my trailer only weighed 2000 pounds that would add a large safety margin into the equation.

I got three twelve-foot and three eight-foot sections of 2x2x1/4 angle iron. This was around $150. I only had three small pieces of waste metal when I was finished.

I laid out two of the 12 foot pieces as side rails and two of the eight foot pieces as end rails. I got 4 buckets all the same size to work on. The side rails went on the buckets. I clamped the end rails under the side rails. The axle was lifted onto the side rails and the width of the end rails adjusted to 61 inches on the outside, so that the spring mounts were just inside the frame.

Everything was tack-welded and then the frame was squared by measuring both diagonal directions. The frame was welded together by my step dad.

The front spring shackles were put on just past the mid point to lean some of the weight of the trailer onto the hitch. The rear spring shackle was put back 25 inches from the front shackle so that the swing arms just broke past the halfway point. The frame and shackles were drilled for grade 5 3/8 inch bolts.

The front A frame was welded to the receiver. It was positioned equally on each side so that it made two triangles on the front corner.

The frame was primed and painted with Rustoleum spray paint.

Top Deck.

My first plan was to build the deck on top of 2x4 framing inside the angle iron, but this puts the deck up 3 1/2 inches higher than it needs to be.

So the plan now is to build inside the angle iron frame and go under it with 2x4's in 3 places to bring the weight back onto the trailer frame.

The four steps of database design are the discovery phase, planning the tables, normalizing, and testing the database using sample data.

Data duplication is entering two or more records in the database about the same entity with a slight variation. You have to delete or merge duplicate records by hand. Data redundancy is the same data stored in the database repeatedly; you remove this redundancy by normalizing the database. The reason you want to store data a minimum number of times is to help keep the data up to date and consistent.

Scope creep is when new features
and requirements are added to a project after the project has begun.

When you are assigning a data type and size to a field you have to know the storage requirements for that field. You need to know the size and range of the data that will be stored in the field.

Text fields store up to 255 characters and are used for short collections of text and codes such as phone numbers, email addresses, and zip codes. Memo fields store up to 1 GB of data, display up to 64,000 characters, and are used for formatted text and to accumulate logs in append mode.

Currency fields are number fields
with a currency sign in front of them. Currency fields also default
to 2 decimal places.

You should never store calculated fields in your database. For instance, if you know someone is 32 years old, you don't store their age, you store their date of birth. That way you can generate a report now or 5 years from now that includes the age of that person and it will be correct in both cases, without having to go into the database annually and recalculate that person's age.
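A small sketch of that rule in Python: keep the date of birth stored and derive the age whenever a report runs. The dates used here are only examples.

from datetime import date

def age_on(birth_date, as_of=None):
    """Derive age from a stored date of birth instead of storing the age itself."""
    as_of = as_of or date.today()
    years = as_of.year - birth_date.year
    if (as_of.month, as_of.day) < (birth_date.month, birth_date.day):
        years -= 1  # birthday hasn't happened yet this year
    return years

# The same stored value stays correct now and five years from now:
print(age_on(date(1980, 6, 15), date(2012, 4, 25)))  # 31
print(age_on(date(1980, 6, 15), date(2017, 4, 25)))  # 36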

When you have a field in a form that has a small set of non-changing possible values, you create a drop-down box to allow people to quickly set the value in that field. This prevents things like someone entering a state name that doesn't exist. You could have drop-downs for ice cream flavors at an ice cream shop, or for picking the name of someone to assign a bug to in a bug tracking database.

What are the three general rules about naming objects in a database? Names have to be less than 64 characters, but should be much shorter than that. Names cannot include a period, exclamation point, accent grave, or brackets. Names cannot include spaces.

The four main database objects
are Tables, Queries, Forms, and Reports. Tables hold data, queries
ask questions about the data, forms allow you to enter and display
data and to act as a switchboard to your program, and reports allow
you to retrieve and format data from the database in an attractive
way.

Select, Action and Crosstab
queries are the three types. Select queries ask questions about the
data in the tables in the database and display a dataset. Action
queries change the data in the database. Crosstab queries calculate
data from a table and display it in a dataset.

Redundant data entry is error prone and difficult to update when information changes. If the information does change and you don't change all of the occurrences, then you have introduced inconsistencies into the data.

A primary key is a field or set of
fields that uniquely identifies a record. If this key is included
in another table it is called a foreign key. By linking tables
together this way you create a relation between the tables that
links the records in one table to the records in another table.
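A minimal sqlite3 sketch of that primary-key/foreign-key link; the table and column names are invented for illustration. With foreign-key enforcement turned on, a row that points at a nonexistent primary key is rejected.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # have SQLite enforce referential integrity
conn.executescript("""
CREATE TABLE customers (
    customer_id INTEGER PRIMARY KEY,   -- primary key: unique and not null
    name        TEXT NOT NULL
);
CREATE TABLE orders (
    order_id    INTEGER PRIMARY KEY,
    customer_id INTEGER NOT NULL REFERENCES customers(customer_id),  -- foreign key
    placed_on   TEXT
);
""")
conn.execute("INSERT INTO customers VALUES (1, 'Acme')")
conn.execute("INSERT INTO orders VALUES (10, 1, '2012-04-25')")       # one customer, many orders
try:
    conn.execute("INSERT INTO orders VALUES (11, 99, '2012-04-26')")  # no customer 99 exists
except sqlite3.IntegrityError as exc:
    print("rejected:", exc)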

A is one to many: any one customer will have one or more orders. B is one to one: a state has only a single capital. C is many to many: there are many college students in a class, and each of the students takes multiple classes.

Entity integrity is enforced by using a primary key, which requires that there be only one record in the table with that key and that the key is not null. Referential integrity is when the values in the foreign keys of a table match the table where they are primary keys. Yes, you should enforce referential integrity in a database so that you don't get records with values that don't relate to the other tables correctly.

Deletion anomalies occur when you delete a record and the cascading delete removes data from related tables because that was the record's last matching record set.
Update anomalies occur when there is duplicate data in the database
and an update only changes some of the data. Insertion anomalies
occur when you can't insert a record into a table unless you enter
data into another table first.

Normalizing a database results in a smaller database, reduces the occurrence of inconsistencies, and reduces the occurrence of all three types of anomalies. The first normal form removes repeating groups. The second normal form removes partial dependencies, so every field depends on the whole key. The third normal form removes transitive dependencies, so every non-key field depends directly on the key.

A determinant is a field, or set of fields, whose value determines the value in another field. A partial determinant is where the value depends on only a subset of the key. A transitive dependency is where a field depends on another non-key field, which in turn depends on the key.

Tuesday, April 24, 2012

5‐Paragraph Essay

This document gives a general overview of the 5 paragraph essay. It is as brief as possible and should be used in conjunction with several specific good and bad examples in order to teach how to use this tool to improve writing techniques.

Outline

Introduction (with thesis statement)

Body Paragraph #1 (with topic sentence)

Body Paragraph #2 (with topic sentence)

Body Paragraph #3 (with topic sentence)

Conclusion

Introduction

Is the first paragraph of your essay.

Introduces your topic to your reader.

Tells the reader exactly what the rest of the essay is about.

Concludes with a clear, strong thesis statement.

Body Paragraph #1

Open with first topic sentence.

Corresponds to the first point in the essay map.

Body Paragraph #2

Open with second topic sentence.

Corresponds to the second point in the essay map.

Body Paragraph #3

Open with third topic sentence.

Corresponds to the third point in the essay map.

Conclusion

In your conclusion, reflect on the main points you made in the paper.

Highlight the most important information.

Do not introduce new points.

Do not simply re‐state your thesis statement and/or the main points from the essay.

Leave your reader with something interesting to think about.

What is a Thesis Statement?

A single, clear, concise sentence.

The final sentence of the introduction.

Contains the topic of your essay, and your opinion on the topic.

It often includes an “essay map” that lists the three main points you plan to make in the paper.

What is a Topic Sentence?

A topic sentence is a
single sentence at the beginning of a paragraph that tells your reader
what the paragraph is going to be about.

A topic sentence is similar to the thesis statement, but it works only on the
paragraph‐level, whereas the thesis statement covers the whole essay.

Each topic sentence should directly reflect one of the points made in the thesis statement.

Body Paragraph

Will focus on a single idea, reason, or example
that supports your thesis.

Discuss only one point per body paragraph.

Begins with a clear topic
sentence (a mini thesis that states the main idea of the paragraph)

Has as much discussion or explanation as is necessary to explain the point.

Use details and specific examples to make your ideas
clear and convincing

Five lines minimum per paragraph.

Transitions

Connect your paragraphs to one another, especially the
main body ones.

Do not jump from one idea to the
next.

You need a
transition between each paragraph.

Use the end of one paragraph and/or the beginning of
the next to show the relationship between the two ideas.

Think about words and phrases that compare and contrast.

Does the first tell us a pro and the second a con? ("on the other hand . . .")

Does the second tell us something of greater significance? ("more importantly . . .")

I wrote this up a few years ago, thinking about how to implement a way to program complex systems. Instead of writing programs as in previous programming languages, you create assembly lines of objects through which the data flows as messages. This turns everything around. Multi-threading, multi-processor and cloud processing should be able to be introduced at the system level, without adding any complexity to the "programs" that people have already written.

--

The application programming framework is a general framework to manage the memory and messaging between objects.

Everything in the framework is an object. Everything from a simple number to the most complex protocol is an object: TCP is an object, a file is an object.

Let us say that one wanted to create a web server.

One would create a work unit object to contain everything. A view, if you will.

Inside the view you create a TCP object, a stream to http object, and an http processor object. Everything would be configured at this point. Then connect the inputs and outputs between the objects together.

The starting and stopping actions of the work unit are defined.

Finally the objects would be started from the http processor to the stream to tcp object, and finally the tcp port would be started.

After everything is started, the tcp port accepts connections. This generates a connect message with a session id that is sent to the stream-to-http object, which allows it to set up a data structure in expectation of more to come.

Any data that comes in on the session is sent to the stream-to-http object as a stream object with a session id embedded in it. The http data object generated is associated with the session id as well. The http data object is sent to the http processor, which queues up all the http requests.

When the work unit is deactivated, the tcp object stops listening for new connections, the queue is flushed and all current work is finished; then the http processor is disabled, the stream-to-http object is deactivated, and finally the tcp object is deactivated. After a timeout, even if a work unit isn't finished, everything is shut down anyway.

The key here is that it should be a very simple script to create, configure and connect these objects together and then to control their startup and shutdown behavior. With the proper base objects it should be easy to create a secure web service or an rss feed.
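A rough Python sketch of what that wiring script might look like. None of this is an existing framework: the class names (TcpListener, StreamToHttp, HttpProcessor), the Port connector, and the start/stop conventions are all hypothetical, just to show the shape of the idea.

class Port:
    """One-way link between objects; messages flow through send()."""
    def __init__(self):
        self.target = None
    def connect(self, handler):
        self.target = handler
    def send(self, message):
        if self.target:
            self.target(message)

class TcpListener:
    def __init__(self, port):
        self.port, self.output, self.running = port, Port(), False
    def start(self):
        self.running = True       # a real object would begin accepting connections here
    def stop(self):
        self.running = False      # stop listening and let in-flight work drain

class StreamToHttp:
    def __init__(self):
        self.output = Port()
    def input(self, chunk):
        self.output.send({"request": chunk})   # a real object would reassemble the byte stream
    def start(self): pass
    def stop(self): pass

class HttpProcessor:
    def __init__(self):
        self.queue = []
    def input(self, request):
        self.queue.append(request)             # queue up all the http requests
    def start(self): pass
    def stop(self): pass

# Create, configure, connect, then start from the consumer end back toward the tcp port.
tcp, codec, handler = TcpListener(8080), StreamToHttp(), HttpProcessor()
tcp.output.connect(codec.input)
codec.output.connect(handler.input)
for obj in (handler, codec, tcp):
    obj.start()

Shutdown would run the same loop in the opposite order, so the tcp object stops accepting new work before the downstream objects are disabled.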

-- -- --

There are multiple levels.

The memory.

The objects.

-- -- --

The first Object must be hand crafted, because there are no facilities to create objects until the object class exists.

Once the Object class is present then you can subclass from the Object to create new classes.

I want to be able to easily swap out any subclass of the same type, so that you can easily change out a file for a tcp connection to make it easy to test processing from a file.

-- -- --

Versioning

If you just declare the path to this point, then you will get the highest version available.

Let us say that we have tcp.

inside that object path is version info.

/grokthink/stream/tcp/1/0/0
                       /1/0
                         /1
                       /5/0
                       /6/3
                     /2/0/0
                       /1/0
                       /2/0
                         /1

It goes /major/minor/build

If you don't specify a version, then you get the highest version of the same major number you used when you built something. You can specify just a major number, or a major and minor number or a complete path to a specific version.
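A sketch of how that resolution rule could work in Python. Reading the tree above as the versions 1.0.0, 1.1.0, 1.1.1, 1.5.0, 1.6.3, 2.0.0, 2.1.0, 2.2.0 and 2.2.1 is an assumption (the original indentation was lost), and the function name is hypothetical.

VERSIONS = [(1, 0, 0), (1, 1, 0), (1, 1, 1), (1, 5, 0), (1, 6, 3),
            (2, 0, 0), (2, 1, 0), (2, 2, 0), (2, 2, 1)]

def resolve(major=None, minor=None, build=None, versions=VERSIONS):
    """Return the highest version matching however much of major/minor/build was given."""
    matches = [v for v in versions
               if (major is None or v[0] == major)
               and (minor is None or v[1] == minor)
               and (build is None or v[2] == build)]
    return max(matches) if matches else None

print(resolve(major=1))           # (1, 6, 3): highest version within major 1
print(resolve(major=2, minor=2))  # (2, 2, 1)
print(resolve(1, 0, 0))           # (1, 0, 0): a complete path names an exact version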

-- -- --

Installing versions.

"GrokThink" is the name for our company. The URL to the GrokThink object repository is stored as a property in the GrokThink Object.

An interface is designed to check this repository for new versions and ask if the user wants to install them.

Build number changes are for internal use, to differentiate between versions for QA. Minor number upgrades are controlled by QA: usually once a bug fix is done, the build number is set to 0 and the minor number is incremented. Typically these changes are minor bug fixes or a feature addition that will not change the behavior of previous functionality. The key here is that the fix or feature addition should not change the behavior of objects previously used to build prior services or applications.

Major number changes are made for incompatible changes to an object. Often you will change the interface or fix a bug in such a way as to no longer work with older objects. At this time the major number should be incremented and the minor and build numbers set to 0.

Development goes like this:
1. Developer gets a bug against a specific major version.
2. The bug report includes a new unit test to demo the bug.
3. Developer checks out the source, this increments the build number.
This also locks the object so nobody else can work on it.
4. The Developer fixes the bug.
5. The Developer runs the unit tests against the object, adding in the new bug unit test.
6. Once all the tests pass, the Developer can check in the fix to the code system.
A description of what was fixed should be attached.
7. This creates a diff from the old code, the new code and diff go into the system for another developer to double check.
8. The second developer approves or disapproves the fix. If there are problems, the two developers negotiate a fix, and the second developer approves it after the changes are made.
9. The final fixed release is sent to QA.
10. QA approves the release and the release is put into the published area of the site.

11. The end users' installations of the software can now download and install the new object.

The Development system comes with the software to perform all of these actions and publish to a public web server for your own company. The company name is the name of the web server that the repository resides on. This will prevent any conflicts as time goes on between different companies. There can be a redirect added to override the repository location if the website name where the repository resides changes in the future.

What all these articles are failing to realize is that at today's launch costs _anything_ already in orbit is worth several times its weight in gold, minimum. Get 60 tons of water from an asteroid and it's worth a cool million dollars. Get 60 tons of any resource from an asteroid and it's worth a cool million dollars.

Nobody is going to bring anything back to earth; that would be ridiculous. You'd be better off filtering elements out of salt water.

Mining asteroids is so we can colonize the solar
system cheaper.

The only thing that could compete with mining asteroids economically
is a space elevator, which would have to be made out of unobtanium and
which would attract terror attacks like moths to a flame.