Published notes are attached to this post. Below are the notes that eventually became the presentation.

So the idea is to talk about MongoDB in an implementation prepared to log data that doesn't necessarily have a schema. JSON is a standard that defines data at a state in time. A sample forum user makes a good example.
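The sample document itself didn't survive into these notes, so here is a minimal sketch of what a forum-user document with an embedded memberships document might look like. Every field name here is illustrative, not from the original:

```python
import json

# Hypothetical forum-user document; all field names are illustrative.
user = {
    "username": "sixbanger",
    "joined": "2011-04-02T14:07:00Z",
    "memberships": {            # embedded document -- could nest deeper
        "groups": ["tuners", "mongodb"],
        "role": "member",
    },
}

print(json.dumps(user, indent=2))
```

The embedded memberships document is the part that could go much deeper in a real forum schema.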

Honestly I see the memberships embedded document going deeper, but this is a sample and I'm trying to keep it simple. Used in engine management, this could be combined with CAN, the network-of-sensors standard. It provides banks of sensors, so a document would look something like this:

Code:

{ "p_0": 15.0, "p_1": 0, "p_5": "duh, is this getting boring?" }

A t value could then be dynamically appended that defines where in the logging sequence the document falls, and that value would be indexed.
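A quick sketch of stamping that t value onto each sensor bank before insert. The field name t and the counter are assumptions; with pymongo the index itself would be created once with db.log.create_index("t"):

```python
import itertools

# Stand-in for a monotonically increasing cycle counter.
_t_counter = itertools.count()

def stamp(doc):
    """Attach the sequence value t; the field name is an assumption."""
    doc["t"] = next(_t_counter)
    return doc

banks = [{"p_0": 15.0, "p_1": 0}, {"p_0": 14.8, "p_1": 0}]
stamped = [stamp(b) for b in banks]
print(stamped[1]["t"])   # 1
```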

It's also a perfect place to put unimplemented features: if at a later date we have data that is expected to be 0, we can filter it out easily. The _max and _avg fields can be filtered until the register starts responding, like an O2 sensor.
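A sketch of that filter, dropping registers still reading 0 before computing aggregates. The sample field names are assumptions; the equivalent MongoDB filter would be {"o2": {"$ne": 0}}:

```python
# Drop unimplemented registers (those still reading 0) before
# computing _max / _avg style aggregates. Field names are invented.
samples = [
    {"t": 0, "o2": 0, "map": 98.2},    # o2 register not responding yet
    {"t": 1, "o2": 450, "map": 97.9},
]

live = [s for s in samples if s.get("o2", 0) != 0]
print(len(live))   # 1
```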

So the engine is at the mercy of the ECU until the feedback from the O2 sensor can keep the engine running. Before that time, the ECU needs to deal with throttle position, air bypass, injection and spark.

The voltage of the sensor ranges from 100 to 900 mV, and the current ECU will throw a trouble code if the reading remains below 300 or above 800.
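That trouble-code rule can be sketched directly. The 300/800 mV thresholds come from the text above; the idea of evaluating a window of readings (rather than a single sample) is an assumption:

```python
def trouble_code(readings_mv):
    """True if every reading in the window is stuck low or stuck high."""
    return all(r < 300 for r in readings_mv) or all(r > 800 for r in readings_mv)

print(trouble_code([120, 150, 110]))   # True  -- stuck lean
print(trouble_code([120, 450, 110]))   # False -- signal is swinging
```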

So it takes event handlers like t_value = now - 150 on _avg. If we need to evaluate whether a sensor has been offline for 150 cycles we need:

- listeners
- small MongoDB instances that are self-garbage-collecting, using capped collections
- optimized data evaluation

This same capped collection can be used to determine that a sensor is out of range.
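A sketch of the self-garbage-collecting listener idea: a capped collection only keeps the newest documents, so "offline for 150 cycles" becomes a query over that window. Here collections.deque(maxlen=...) stands in for the capped collection; with pymongo the real collection would be created once with db.create_collection("o2_log", capped=True, size=2**20, max=150). The field names and window logic are assumptions:

```python
from collections import deque

# deque(maxlen=150) mimics a capped collection holding 150 documents.
window = deque(maxlen=150)

def offline(window, now, cycles=150):
    """True if no document in the window is newer than now - cycles."""
    return not any(doc["t"] > now - cycles for doc in window)

for t in range(100):
    window.append({"t": t, "mv": 450})

print(offline(window, now=100))    # False -- still reporting
print(offline(window, now=400))    # True  -- silent for > 150 cycles
```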

Since we don't know how many documents per second we are going to receive, we tune the capped collection on the smallest number that can be measured when saving only the single sensor to the database. Since document performance is based on the size of the document, the best-performing size can be set as a value based on that test.

You can then gauge your performance when you query on the t value for the time range the test covered. Say our isolated test revealed that maximized performance took 3000 documents, and in implementation there were 2000 documents returned from the same query.
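The comparison is simple arithmetic: the 3000-document ceiling from the isolated test against the 2000 observed tells you how much headroom remains. The numbers come from the text; the headroom framing is my assumption:

```python
def headroom(test_max_docs, observed_docs):
    """Fraction of the benchmarked ceiling still unused."""
    return 1 - observed_docs / test_max_docs

print(round(headroom(3000, 2000), 3))   # 0.333
```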

Use it as an extraction layer for an existing SQL instance by issuing SQL commands, interpreting the results into a JSON log output, and outputting to the target {name to be determined, I know it starts with the letter x and was released in a later version of their BI tool; the whole reason I'm implementing it in a data center with the upgraded release version is so all the new features unlock as part of the migration}, which is designed to show multidimensional views of predefined inputs.
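A minimal sketch of that extraction layer, with sqlite3 standing in for the real SQL instance. The PBAInstanceTable name appears later in the post; the columns are invented for the example:

```python
import json
import sqlite3

# In-memory stand-in for the existing SQL instance.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE PBAInstanceTable (id INTEGER, finish TEXT)")
con.execute("INSERT INTO PBAInstanceTable VALUES (1, 'bronze')")
con.row_factory = sqlite3.Row

# Issue SQL, then re-emit each row as a JSON log document.
rows = con.execute("SELECT * FROM PBAInstanceTable").fetchall()
logs = [json.dumps(dict(r)) for r in rows]
print(logs[0])   # {"id": 1, "finish": "bronze"}
```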

In troubleshooting large SQL tables I often start by querying with:

Code:

SELECT TOP 1 * FROM PBAInstanceTable

which returns the column names of the table. That table is used to define a product model, which has configuration options for a product like Kent, as seen in the configuration engine.

Options like vaulted, finish, length, or shade are records in SQL referenced by number. I would want another SQL statement to return the collection of these configurations. By the way, it will return pre- and post-configuration-engine variables, so you will actually need to filter on those as well if you want to return a single configuration detail set.
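A sketch of that pre/post filter, again with sqlite3 standing in for the real instance. The table, column, and phase values are all assumptions; only the option names come from the text:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE ConfigVars (option TEXT, phase TEXT, value TEXT)")
con.executemany(
    "INSERT INTO ConfigVars VALUES (?, ?, ?)",
    [("finish", "pre", "raw"), ("finish", "post", "bronze"),
     ("length", "post", "36")],
)

# Keep only the post-configuration-engine variables.
post = con.execute(
    "SELECT option, value FROM ConfigVars WHERE phase = 'post' ORDER BY option"
).fetchall()
print(post)   # [('finish', 'bronze'), ('length', '36')]
```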

The state is the addition of a state machine, which allows the document's processing state to follow the document. It allows built-in error handling that happens within the document itself. MongoDB is a great place to add features like a state machine for some documents, plus a capped collection so old documents fall off; documents that were in error have likely been fixed by the time the error drops off, which makes for efficient troubleshooting with built-in analytics.

For example, your state machine might inspect the model for the product, validating that all of the required config variables exist, and if it detects an anomaly against the model it can set the state to an error value.
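A sketch of that in-document state machine: the processing state rides along in the document itself, and validation flips it to an error state when a required config variable is missing. The state names and the required list are assumptions (the option names come from earlier in the post):

```python
# Hypothetical required config variables for the model.
REQUIRED = {"vaulted", "finish", "length", "shade"}

def validate(doc):
    """Advance the document's state based on its own contents."""
    missing = REQUIRED - set(doc.get("config", {}))
    doc["state"] = "validated" if not missing else "error"
    doc["missing"] = sorted(missing)
    return doc

good = validate({"config": {"vaulted": True, "finish": "bronze",
                            "length": 36, "shade": "none"}})
bad = validate({"config": {"finish": "bronze"}})
print(good["state"], bad["state"])   # validated error
```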

Now let's say we want to tie in a shutter from an external web cam. We need to tie an embedded document into the logger with the image captured from the camera, so it gets tied to the sensors.

A BSON id begins with the timestamp, so just by logging data we get the timestamp, and we can log any type of data, since the nature of binary storage is to save raw data. GridFS is also available if you intend to save very large dumps. GridFS would be applicable if you wanted to locally save images of computers under version control. Replication would come in handy, but these machines should have a network control task so their bandwidth is limited, monitored, and reported.
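The timestamp-in-the-id point can be sketched without a driver: the first four bytes of a BSON ObjectId are a big-endian Unix timestamp, so every logged document carries its creation time for free. The ObjectId value below is hand-built for the example:

```python
import datetime

# 12-byte ObjectId as 24 hex chars; first 4 bytes are the timestamp.
oid = "4f2b8c1e" + "a1b2c3d4e5f60718"
ts = int(oid[:8], 16)                     # leading 4 bytes -> seconds
when = datetime.datetime.fromtimestamp(ts, tz=datetime.timezone.utc)
print(when.isoformat())
```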

Binary data storage makes storing any type of document possible. UTF-8 seems to work well for me, and file storage and sensor data are all well optimized.

The problem comes in with the overhead of a large database engine on top of a small memory footprint. Transactional data tied directly to a point in time is the better-suited application. User registrations, orders, and inventory with relationships are far better suited to a relational model, though Mongo facilitates this through document references.

Oscillation and sync: the per-second signal is ignored; it is just log data. It may come in faster or slower depending on CPU utilization.

Base distributed storage on the universal cycle count of an engine rather than per second or per transaction.

In distributed environments a dedicated database quickly becomes offloaded. Development works best with a local instance. Mongo is easy to set up and installs very easily on Linux and Mac, per environment.

This allows abstraction of BI tools to analyze the pulse of a signal, tied to signals that include the voltage of every sensor providing feedback.

Analyzing sensors with direct access to query the signals at different intervals within the stroke will allow tuning control to tie back into the forecasted tables. It is the only way to reliably test.

So here is where things get really interesting. We no longer need an engine management system. The car is autonomous. The injectors sense that the key is turned since they received their wake-up 1x cam signal. They know to fire, and they will begin to do so.

Spark is already calculated at the GM DIS interface; this is really the basis of the project. This is where the 6-and-1 7x signal is translated to 3x, which will inform the system of the 3x signal, and the injectors will self-align since they have their own computer chip in line with the harness that receives the CAN signal. This CPU is responsible for sending the MAP and CHT back to the system.
