Author: josefB

The sunlit half of the Earth’s surface, under ideal conditions, receives roughly 2.55 × 10^17 W in total (about 1,000 W/m² at ground level). More than enough to power every country many times over. But when that side of the Earth rotates away from the Sun, its solar energy drops to zero. To depend on solar power, there must be a substitute or a storage option to cover that loss.

Solution
There must be a Planetary Power Grid. The active surface of the planet must supply a percentage of its power generation to the opposite side. The science and engineering of power grids are well understood; the grids just have to be bigger and more efficient, using technologies such as superconducting cables, space-based solar power, and microwave power relay stations.

This power grid will supply power to where it is most needed. A portion of the available energy is used to supply energy storage facilities in regions that have reduced economic activity due to the day/night cycle.

To be feasible, more sunlight capture and higher-efficiency photovoltaic generators are required. Until that can be accomplished, fossil, nuclear, wind, and other supplies will make up the difference.

This is a huge task with a host of technological, political, economic, and logistical hurdles. The cost is in the trillions. But if the tipping point of climate change is approaching, nickel-and-dime approaches are not realistic. This really does require a Green New Deal, but one that multiple countries take part in.

Estimated completion for total globe: 2053.

Increasing the irradiated solar cell area
Solar farms should be in the ocean. These would be a mix of fixed offshore solar farms and floating Sun-tracking systems.

When they are in the ocean, the water could also be used for cooling and cleaning the solar arrays. But this siting must be engineered to have minimal negative ecological effects. Note that marine life depends on solar energy: blocking sunlight would upset fragile oceanic ecosystems, and any kind of tower could impact bird migration patterns.

Other renewables
This same “MegaGrid” could be used for distribution of other energy sources, like wind turbines, tidal power, etc.

My calculation
It’s just a simple multiplication of surface area by energy density.
The surface area of the Earth illuminated by the Sun is approximately 255,000,000 km², just half of the total spherical area. The Sun’s energy at ground level is about 1,000 W/m².
Converting the area to square meters gives 2.55 × 10^14 m². Multiplying by 1,000 W/m² gives about 2.55 × 10^17 W of potentially available power.
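The arithmetic above can be sketched in a few lines of JavaScript (as above, it treats the full 1,000 W/m² as falling on the entire lit hemisphere, so it is an upper bound, since irradiance falls off toward the terminator):

```javascript
// Upper-bound estimate: whole sunlit hemisphere at peak irradiance.
const litAreaKm2 = 255_000_000;     // ~half of Earth's ~510 million km^2
const litAreaM2 = litAreaKm2 * 1e6; // 1 km^2 = 1,000,000 m^2
const irradianceWm2 = 1000;         // peak solar irradiance at ground level
const totalWatts = litAreaM2 * irradianceWm2;
console.log(totalWatts.toExponential(2)); // "2.55e+17"
```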

Abstract

There are some things a FSM cannot handle[5]. The author attempts to address this by adding memory-based states connected to logic “gate” junction pseudostates. This enables Statecharts to handle more complex applications while still remaining transition networks at their core.

Josef Betancourt, November 27, 2018

1. Background

Hierarchical Finite State Machines (HFSM), especially the Statechart (UML state machine), are a useful tool in increasingly complex applications like Graphical User Interfaces. They allow the logical behavior flow requirements to be expressed in a declarative model.

The state of the art, Statecharts, are defined in diagrammatic terms by David Harel as:
“statecharts = state-diagrams + depth + orthogonality + broadcast-communication”. [1]

In real-world use, various frameworks and libraries, such as SCXML, add constraints and Executable Content[4] to that declarative state transition network. These added features actually execute an imperative flow of control. So it could be argued that the state graph does not really specify the system in a declarative model.

Why isn’t the state graph enough? While Statecharts do limit the combinatorial state explosion that is a drawback of traditional FSMs, they export some of the complexity to a side channel, or dimension, of the Statechart’s run-to-completion behavior. These are ‘extended state machines’[2].

2. Example

A GUI form presents a list of fields that must be completed or activated by the user. There is also a Done button that the user clicks to signal that the form is ready to be submitted.

Requirement: all fields must be activated by the user. When Done is clicked, an error message is shown if all fields have not been activated; otherwise the form is submitted. This is a common ‘validation’ scenario in form processing. The fields, for example, could be checkboxes. The Done button invokes a validation process to determine if the form is ready for submission.

A naive Statechart of this scenario would make each field, and the Done button, a state. Each field could even be a concurrent state (in an orthogonal region) of the form. But when the Done button is clicked and the form must transition to a Validating state, how is the state of all the fields used to determine the transition to error or submit? How is ‘all fields were checked’ captured in a Statechart?

A programmatic solution is just to transition to a validation state that determines if all fields were activated. In a Statechart this complexity can be handled by an invoked program/script that evaluates a transition guard to a target state: error or submission. This guard uses the extended state to allow this determination.

Note:

This is just a simple example. That it could be done with present-day Statecharts alone is not the issue. It is probably not a perfect example for this topic.

The UML statechart standard, for example, has a richer set of states for this purpose: pseudostates such as join, fork, junction, choice, and exitPoint.

David Khourshid, the developer of XState, responded to this example at https://github.com/davidkpiano/xstate/issues/261#issuecomment-442451581

“Have you seen the semantics of <final> states (as in SCXML)? https://xstate.js.org/docs/guides/final.html
We can model “all fields must be valid” by setting 'valid' as the “final” state for each orthogonal node and listening for the done.state.* event.”

And then gave a code example.

3. Synchronous AND transitions

Petri Net approach

The example scenario could be trivially solved using a simple Petri Net model. Each form field is a ‘place’ and when it is activated, a ‘mark’ or token is placed there. All the places connect to a ‘Transition’ that fires when all the input places have a mark. This fired transition puts a mark into the Valid place.
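That firing rule can be sketched in a few lines of JavaScript (the field and place names are illustrative, not from any Petri Net library):

```javascript
// Minimal Petri Net sketch of the form-validation example: each field is a
// 'place'; the transition fires only when every input place holds a token.
const places = { field1: 0, field2: 0, field3: 0, valid: 0 };

function activate(field) {
  places[field] = 1; // user activates a field -> put a token in its place
  tryFire();
}

function tryFire() {
  const inputs = ['field1', 'field2', 'field3'];
  if (inputs.every((p) => places[p] > 0)) {
    inputs.forEach((p) => { places[p] = 0; }); // firing consumes the tokens
    places.valid = 1;                          // ...and marks the Valid place
  }
}

activate('field1');
activate('field2');
console.log(places.valid); // 0 -- not all fields activated yet
activate('field3');
console.log(places.valid); // 1 -- transition fired
```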

State Memory

Borrowing from Petri Nets, we can use marks by adding memory to a state. This memory allows the synchronous use of prior state transitions. History states are already one type of memory used in Statechart standard. By generalizing the use of memory, more complex systems can be modeled.

Why ‘synchronous’? A machine can have only one current state. In a Statechart, nested states form an XOR. To handle complexity, Statecharts also provide an AND in the form of orthogonal substates. These are called ‘parallel’ in the XState implementation [6]. But they could loosely be termed asynchronous, since their behavior can be independent of the superstate container.

Definition

State memory is a numeric limit-value attribute used as a modulus. The current value changes dynamically. The limits are:

n = -1: latch
n = 0: no memory (default)
n = 1: toggle
n > 1: roller

These values allow the state to be repeatedly reused in reactive state changes.

4. Operation

To provide a transition via an AND, a state must have a non-zero memory limit. If the machine re-transitions to the same state, the memory value determines when the memory is cleared. The preset memory-size value is the modular rollover for the increment. Whenever the state is entered, the current value is incremented. When the value reaches the memory size, the current value is set to zero.

A size of zero turns memory off. A negative size causes a transition to the state to always be remembered, with no incrementing. A value of one clears the memory on each subsequent transition, so it acts as a toggle. When the size is greater than one, subsequent transitions to the state increment the memory up to the value of the memory-size setting.

When all the memory states have a current value, the AND fires, and the machine transitions to the target state of the AND. Unlike a Petri Net, a fired AND also resets all the input states’ current memory to zero. In practice, an implementation would make this behavior configurable per AND relation.
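One plausible reading of these rules, sketched in JavaScript (illustrative code, not from any Statechart library; in particular, using limit + 1 as the modulus is my assumption about where the rollover happens, chosen so that n = 1 behaves as a toggle):

```javascript
// Sketch of the state-memory scheme: -1 latches, 0 disables memory,
// 1 toggles, >1 rolls over after reaching the limit.
class MemoryState {
  constructor(limit) { this.limit = limit; this.count = 0; }
  enter() {
    if (this.limit === 0) return;                      // no memory
    if (this.limit === -1) { this.count = 1; return; } // latch: remember once
    this.count = (this.count + 1) % (this.limit + 1);  // toggle / roller
  }
  isSet() { return this.count > 0; }
  reset() { this.count = 0; }
}

// An AND "gate" fires when every input state has non-zero memory, then
// (unlike a Petri Net, as described above) resets all the input memories.
function andFires(states) {
  if (!states.every((s) => s.isSet())) return false;
  states.forEach((s) => s.reset());
  return true;
}

const a = new MemoryState(-1), b = new MemoryState(-1);
a.enter();
console.log(andFires([a, b])); // false -- b has no memory yet
b.enter();
console.log(andFires([a, b])); // true -- both set; memories reset
```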

States with memory are ‘normal’ states in a Statechart. Their ‘memory’ aspect only affects behavior when used with an AND pseudostate.

5. Symbol in visual diagram

The simplest realization in visual form is a Petri Net style solid bar: |. A bar is also used in UML activity diagrams to indicate a fork/join. Since the bar already has so many connotations, an actual military-style AND circuit symbol or a European-style & rectangle could be used instead.

6. Example solution using Statechart memory

With the aforementioned approach the Petri Net equivalent solution is shown below. Note that this diagram is not showing how to formally add memory and gates to the standard Statechart visual notation.

7. Generalization to Gates

Adding memory to a state and its use to create an AND pseudostate leads to the possibility of other types of pseudostates to tackle fine-grained complexity and details. Since these are more like logic ‘gates’, calling these “gates” seems appropriate.

With the addition of OR (||) and NOT (o), many other gates could be created, such as NAND, NOR, and so forth. This is similar to the many types of gates, and their combinations, that capture hardware logic and are specified using Hardware Description Languages (HDL)[3].

8. Conclusion

The addition of state memory connected to gates enables the creation of more complex and detailed Statechart or HFSM models. More work is required to evaluate this approach and then modify or abandon it.

In a way this effort bridges the gap from high level software back to hardware. It may be possible to use Computer Aided Design (CAD) technologies in this realm, especially if extendable Statecharts could be used as component building blocks.

Based on my quick review, it seems a company should add GraphQL to their tech stack. It’s not an either/or situation.

REST
Representational State Transfer (REST) is the architecture underlying the WWW, and it scales. The REST concept is also used to create RESTful services as an alternative to older technologies such as Remote Procedure Calls (RPC) and SOAP.

REST has some limitations and critiques. For example, few REST APIs are truly “RESTful”. The level of REST is captured in the Richardson Maturity Model (RMM). The highest level requires use of HATEOAS; see “Designing a True REST State Machine”.

“A very strong argument could be made that if most APIs are REST-ish instead of REST-ful, and assuming that most of the conventions that we’re actually using boil down to making URLs consistent and basic CRUD, then just maybe REST really isn’t buying us all that much.” — @brandur

GraphQL
GraphQL was created to fill a hole in modern web APIs. GraphQL, unlike common REST queries, describes the ‘shape’ of the data to retrieve from a single endpoint. The structure of the data behind that endpoint is a black box; the host knows how to fill in the shape to create the response. Kind of like SQL (in a very rough kind of comparison): in SQL the data has a shape, the Relational Model, and the single-endpoint queries declaratively describe what to get from that shape. In REST, when you ask for a cup you get the kitchen sink and all the cabinets.

“GraphQL is a declarative data fetching specification and query language for APIs. It was created by Facebook back in 2012 to power their mobile applications. It is meant to provide a common interface between the client and the server for data fetching and manipulations. GraphQL was open sourced by Facebook in 2015.” — GraphQL vs REST

The single endpoint is a critical distinction. To get distributed, consolidated, or nested data, a REST endpoint could, of course, invoke an integration service on the backend, or use techniques such as API Chaining. In GraphQL, the “integration” exists semantically in the client query. The declarative query just describes the result, and the server provides it. And since the shape determines the data desired at the single endpoint, the over/under fetching of REST is avoided. The query is ‘resolved’ on the server, which may invoke multiple existing SQL, REST, MQ, or other services. This affords natural growth of the API. In contrast, a REST API grows via more endpoints and/or the addition of query parameters (which can morph into disguised RPC).
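As a hedged sketch of what the shape-describing query looks like from a client (the endpoint URL, type, and field names below are invented for illustration, not from any real schema):

```javascript
// The client describes the shape of the result, including nested data,
// and POSTs it to the single GraphQL endpoint.
const query = `
  query ($id: ID!) {
    user(id: $id) {
      name
      orders { total }
    }
  }`;

// Build the HTTP request body for the single endpoint.
function buildGraphQLRequest(query, variables) {
  return {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query, variables }),
  };
}

const req = buildGraphQLRequest(query, { id: '42' });
// In a browser: fetch('https://example.com/graphql', req).then(...)
```

Note that the nested orders data arrives in the same round trip; a typical REST design would need a second endpoint or a chained call.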

If there are no physical constraints, could a program look up its solution by firing off a set of queries into a solution space?

While taking a shower this morning I was pondering about some things I read recently.

First, some history. Programs were created to run within constrained resources. As such, a program is just a collection of small bits of data transforms. Yes, it’s more complicated than this; don’t interrupt.

What if there were no constraints? In the future we will have more memory, and network speeds will be much better.

With no constraints we could map inputs to outputs directly, with no computations. A program becomes a query into possible solution spaces. A functionless approach.
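A toy illustration in JavaScript: precompute every input/output pair once, and a “program run” becomes a pure table query (the squaring function and the bounded input range are just for illustration):

```javascript
// Conventional program: compute the answer on demand.
const compute = (n) => n * n;

// "No constraints" version: precompute the whole solution space once...
const space = new Map();
for (let n = 0; n < 1000; n++) space.set(n, compute(n));

// ...then a program run is just a query into that space.
const run = (input) => space.get(input);
console.log(run(12)); // 144 -- looked up, not computed
```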

This is already the case with database technology. For example, relational databases use a relational-algebraic basis for declarative programming. In some data centers a whole database of terabyte size fits in memory.

What is required is a mathematical basis for constraint-free programming. Here is my intuitive notion: a program is a dot in a vast tensor state space. When a user clicks on a button, that dot moves to a different subspace, and all its queries are evaluated with new basis vectors. When the dots are related to other dots, we have a system.

Since a program is just a bunch of queries from an “instance” to a ‘space’, it should be possible to use big data techniques (“AI”) to generate that space. This would make it possible to automatically generate applications based on a transition graph of how objects move through space.

Well, I have no time or expertise to make the above work. It was probably some chemical in the soap. It doesn’t seem possible to create such a space.

Solar powered landscape lighting is an inexpensive way to add a simple accent to any ground area.

I bought a box of 12 lights and the cost was about 80 cents per unit. I didn’t have time to put them where I wanted. I just unpacked them and put them in a hanging basket on the deck.

Just a bunch of solar lights

Wow, I accidentally created a beautiful effect. On a very dark night, JBOSL (Just a Bunch of Solar Lights) gives a soft, eerie accent light.

They don’t even have to be in direct sunlight during the day. You don’t want bright lights, just a glow. Of course, if they don’t charge enough the glow won’t last long.

There are actual hanging solar powered light products out there, but for 80 cents a light and a cheap basket, you could add accents all over the place. Not too much, or you’ll be visited and have a close encounter of the wrong kind.

Caveats
Solar lights are not very reliable. First, they are cheaply made, and I think quality control is weak. If you search the web, you’ll find constant complaints that a percentage of the solar lights received don’t even work.
What I find the most painful is that the batteries used in these things are “non-standard”, so they are hard to replace. Yup, rechargeable batteries wear out.

I was standing at the window watching the recycling truck pick up our container. The truck was driven by one person, who operated a giant robotic gripper that extended from the truck to hold the container and empty it. It took about five seconds. Up and down our street the truck zoomed. All done. Amazing. Before, a few (strong!) people did this: one driver and two other workers. Where are they now?

Currently, only the recycle bins in my city are standardized, so a truck with a single manipulator is usable. In the future, it’s inevitable that the rest of the garbage will be disposed of in standard containers, and more people will do less of this manual labor. Nothing new: the march of automation and more powerful computers. For decades there have been those warning of dire consequences, yet we are still here and have more and more stuff. So what is the problem?

There is a confluence of technological breakthroughs bound to happen, and all of them are being quickened by the internet. Change is occurring at internet speed. Robotics breakthroughs and applications are only waiting for new forms of power storage and control.

Currently, all advanced technologies are limited. Drones have limited range, and robots are mostly lumbering beasts that fall over doing the easiest tasks, ones humans can do while sleepwalking. Yet this won’t always be the case. In fact, the substitution of humans with autonomous intelligent systems is not even the only way this Change will happen. The path could just be human augmentation.

People can be augmented to use more AI and robotics technology. This will increase productivity in many fields. Thus, we’ll need fewer people. Even now we see the effect of more information in knowledge industries. Years ago a software developer had to remember many things, but now, with the web, the admin-techno-config trivia is available at a mouse click. The bell curve of who can program has shifted. Since knowledge is widely available, silos of expertise are reduced. Now one person can do many different things. Again, we’ll need fewer people.

If we need fewer people, how will the economic system function? Does it become a giant welfare state where only a few do “meaningful” work and the others have a form of guaranteed sustenance? Or will it become a dystopian nightmare of stratified classes: the majority the new ‘untouchables’, the middle tier occupied by the technocratic knowledge workers, and all presided over by the upper 1% ruling class?

This is a very simple approach using matchMedia support. We store media queries and a listener function as objects in an array. When a media query matches, its listener function sets the relevant component state.
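A minimal sketch of the idea (the query strings and state shape are illustrative; matchMedia is injected as a parameter so the wiring can be exercised outside a browser):

```javascript
// Stand-in for a React component's setState (or a Redux dispatch).
let state = {};
const setState = (patch) => { state = { ...state, ...patch }; };

// Each entry pairs a media query with the listener that flips state.
const breakpoints = [
  { query: '(max-width: 600px)', onMatch: () => setState({ breakPoint: 'small' }) },
  { query: '(min-width: 601px)', onMatch: () => setState({ breakPoint: 'large' }) },
];

function registerBreakpoints(matchMediaFn, bps) {
  bps.forEach(({ query, onMatch }) => {
    const mql = matchMediaFn(query);
    const handler = (e) => { if (e.matches) onMatch(); };
    mql.addListener(handler); // or mql.addEventListener('change', handler)
    handler(mql);             // apply the initial match state
  });
}

// In a browser: registerBreakpoints(window.matchMedia.bind(window), breakpoints);
```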

In our actual project, this approach is used with Redux. The queries, instead of setting component state, update the Store. Containers connected to this store send the breakPoint, orientation, and other props to each child component.

Just an experiment. If pursued further, I would probably make it a HOC (Higher-Order Component).

Created a web app to generate a report from the version control repository, Apache Subversion™. A similar approach is possible targeting a different repository, like Git.

TL;DR
Someone said a process we follow could not be automated. I took that as a challenge and created a proof of concept (POC) tool.

The final GUI using ReactJS is relatively complex: five data tables that hide/show/expand/collapse, searches on those tables, sorting, navigation links, a help page, Ajax requests to access Subversion repo data, export to CSV, report generation, and client side error handling. It started as just a GUI to a report, but since it was easy, more features were added: Zawinski’s law.

To top it off, the app had to automatically invoke the default workflow or no one would use it.

Result?
1. It is a complex disaster that works. And works surprisingly fast. Using ReactJS and Flux made it into a fast elegant (?) disaster that works … kind of.
2. The app served as an example of a SPA in our dev group. But mostly it was a way to try out the ReactJS approach.
3. My gut feel is that there are some fundamental problems in the client side MV* approach which leads to control flow spaghetti (a future blog post).

Since the time I wrote that app, I have noticed a mild pushback on React that points out its hidden complexities. There are now new ideas and frameworks, like Redux or Cycle.js. Most recently, to tackle action coordination, much digital ink has been written on Redux Sagas, for example: “Managing Side Effects In React + Redux Using Sagas“.

Note, though there are critiques of the ReactJS approach or implementation, this does not imply that React was not a real breakthrough in front end development.

Report generation
Creating simple reports from a version control repository can be accomplished with command line tools or by querying XML output from SVN log commands. In this case, generating the criteria for the report was the hard part. Details are not relevant here: this web app would remove a lot of manual bookkeeping tasks that our group currently must follow due to frequent branch merging and the use of reports for error tracking, verification, and traceability. Yup, the long-ago legacy Standard Operating Procedures (SOP) of an IT shop.

Architecture

Server
A simple Java web app was created and deployed to a Tomcat server. A Java Servlet was used at the server to receive and send JSON data to the browser based client. This server communicates with the version control repository server.

Client
The browser is the client container, with ReactJS as the View layer and Flux (implemented via the McFly library) as the client framework. Dojo was used as the JavaScript tools library; it supplied the Promise and other cross-browser capabilities. Why Dojo? It is already in use here. If we were using jQuery, that is what I would use.

Local application service
Performance note: since the repo query and processing occur at the server, multiple developers accessing the service would have a performance impact. A future effort is to deploy this as a runnable JAR application (Spring Boot?) that starts an embedded app server, like Tomcat or Jetty, at the developer’s workstation. The browser would still be used as the client.

Repository Query
Some options to generate SVN reports:

1. Use a high level library to access SVN information.
2. Export SVN info to a database, SQL or NoSQL.
3. Use an OS or commercial SVN report generator.
4. Use the command line XML output option to create a navigable document object model (DOM).
5. Use SVN command line to capture log output, and apply a pipeline of Linux utilities.

This was a ‘skunkworks’ project to determine if some automation of a manual process could be done and, most importantly, if doable, would the resulting tool be used? The first option seemed easiest and was chosen. The repo was accessed with the SvnKit Java library. (For Java access to a Git repo, JGit is available.)

The process approach was to generate and traverse a Log collection. A simple rule engine was executed (just a series of nested conditionals) to determine what to add from the associated Revision objects.

This seemed like a workable idea until a requirement was requested after the POC was completed: instead of listing a particular source file once per report, list it once per developer who made a commit to it. An easy change if this were implemented as an SVN log query sent to a pipe of scripts. However, with this design it required going into the nuts and bolts of the “rule engine” to add support for filtering, and further changes to the model.

Yup, a POC solution can be a big ball of mud, and unfortunately can be around a long time. Incidentally, this happened with Jenkins CI; where I …

Very recently, a flaw in the design surfaced that will force a revisit of the algorithm. Instead of making the ‘rule engine’ more powerful, an alternative approach is to start from a Diff collection. The diff result would be used to navigate the Log collection. A similar approach is shown here: http://www.wandisco.com/svnforum/forum/opensource-subversion-forums/general-setup-and-troubleshooting/6238-svn-log-without-mergeinfo-changes?p=36934#post36934

But a problem was already found with the diff output. There is no command line or Java library support for pruning deleted folders. For example, if a/b/c is a hierarchy and you delete b, c is also deleted. Now if you generate a diff, the output contains delete entries for both a/b and a/b/c. What is needed is just a/b. Simple, you say. Sure, but this information is an OOP object graph, so it can be complicated.
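On a flat list of paths the pruning itself is straightforward; a sketch (in JavaScript for brevity; the actual app is Java/SvnKit, so this only illustrates the rule, not the object-graph traversal):

```javascript
// Given delete entries, drop any path whose ancestor is also deleted,
// keeping only the topmost deletes.
function pruneDeleted(paths) {
  const sorted = [...paths].sort(); // parents sort before their children
  const kept = [];
  for (const p of sorted) {
    // The '/' suffix guards against false prefix matches like 'a/bc' vs 'a/b'.
    const covered = kept.some((k) => p === k || p.startsWith(k + '/'));
    if (!covered) kept.push(p);
  }
  return kept;
}

console.log(pruneDeleted(['a/b', 'a/b/c', 'x/y'])); // keeps only 'a/b' and 'x/y'
```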

Perhaps revisit the alternative approaches, like export to a database? I’m not sure this would really have simplified things; it might just change where the complexity is located. Is the complexity of a software solution a constant?

Other systems take this export approach. One system I saw years ago exported the version control history (it was CVS) into an external SQL database and then used queries to provide the required views.

Client Single-Page Application
What to use as the browser client technology? From past experience, I did not want to go down the path of using event handlers all over the place and complex imperative DOM updates.

Anyway, React seemed interesting and had a shorter learning curve. I looked at Angular, but it seemed to be the epitome of embedding the developer into the product (future blog post on the application development self-deception).

The following ReactJS components were created:

BranchSelect
CommentLines
ControlPanel
DiffTable
ErrorPanel
ExclusionRow
ExclusionTable
FilesRow
FilesTable
ManifestRow
ManifestTable
ProgramInfo
ProjectPanel
RevisionRow
RevisionTable
ViewController

Lessons Learned
This project progressed very quickly. React seemed very easy. But that was only temporary. Until you understand a library or a paradigm, start with a smaller application. Really understand it. Of course, these too can fool you. For example, when this app first loads, I had to invoke the most likely use-case. There was an endless challenge of chicken-and-egg model flow disasters. I solved it, but can’t understand how I did it. Somehow I tamed the React flow callbacks. Or this is just a lull and it will blow up as a result of an unforeseen user interaction.

Next SPA project?
My next app will probably use Redux as the Flux framework. Or I may leave ReactJS and go directly to Cycle.js, which is looking very good in terms of ‘principles’ or conceptual flow, and is truly Reactive, being based on a ReactiveX library: RxJS.

Note that the “chaining” or ‘thening’ used here is not quite what chaining was meant for. The flatMap operator used in listing 1 passes the current counter, but the chained Observable does not use it; the Observable just repeats its onNext(…) invocations. Per the documentation, flatMap will “transform the items emitted by an Observable into Observables, then flatten the emissions from those into a single Observable”.
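Plain JavaScript arrays have a flatMap with the same flattening behavior, which makes the quoted definition concrete; here each counter item maps to an inner sequence that ignores the counter, just as the chained Observable in listing 1 does:

```javascript
// Array analogue of Rx flatMap: map each item to an inner sequence,
// then flatten the inner sequences into one.
const counters = [1, 2, 3];
const emissions = counters.flatMap(() => ['hello', 'world']); // inner ignores the counter
console.log(emissions); // six emissions: 'hello', 'world' repeated three times
```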

The strange thing is the “Completed” output. The code does this because of line 14 in the source (does it?). The helloStream invokes onCompleted, but the completed in the subscriber is not triggered until the final counterStream event. Or am I looking at this incorrectly?

Example 2
In example 1 above, the function that operates on each item passed by the source Observable is not used. I’m wondering if it could be used in a “chained” Observable, as in listing 2 below. Does this make sense? And doesn’t it then incur a runtime penalty, since the Observable is not created before it is used?

You can attach an external hard drive to a Raspberry Pi and then share music over Sonos. This works very well. Even though my hard drive is connected to the rPI via USB 2.0, the music streams fine, no stutters.

Right now I’m playing Jeff Buckley’s ‘Sketches for My Sweetheart The Drunk’ all over the house. “Vancouver” track is so awesome!

Technically this kind of storage sharing falls under the term Network Attached Storage (NAS). But, that seems like an overblown term for just sharing one disk. There are a lot of features on a full-blown NAS.

How does the Raspberry Pi share the storage? By running a server called Samba: a set of open-source programs that run on Unix/Linux to provide file and print services compatible with Windows-based clients.
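For reference, a share in Samba is defined by a short stanza in smb.conf; something like the following minimal read-only example (the share name and mount path are assumptions, not my actual setup):

```ini
# Minimal smb.conf share for a USB drive mounted at /mnt/usbdrive
[music]
   path = /mnt/usbdrive/music
   read only = yes
   guest ok = yes
   browseable = yes
```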

Spin down?
Currently I’m looking into how to enable spin down of the hard drive when idle. Necessary? It’s supposed to make the HD last longer, and I just want to reduce power usage, which is the whole point of a Raspberry Pi in this scenario.

Maybe this page, “Spin Down and Manage Hard Drive Power on Raspberry Pi”, will help.

July 5, 2018
My Raspberry Pi died. The flash card had some issue. I spent a lot of time recreating my configuration. The articles I link to in this post did not help much, except for the one at “Retro Resolution“.

Technical details
I had a lot of grief getting it to work. I haven’t touched a Linux system in a while.

Some of the many articles where I found information on how to do this are in the links section below. Note that there isn’t one single approach, and it also depends on what OS you’re running on the Raspberry Pi. I’m running Raspbian, which I installed via NOOBS; all included in the kit I purchased.