Weebly feed, Mon, 17 Dec 2018 21:04:14 -0800

Tue, 11 Dec 2018 16:57:08 GMT
http://www.marinemagnet.com/status-updates/top-10-virtual-reality-training-simulation-application-to-field-activity-scenario-prototypes

The future of Marine aviation is complex: aircraft are growing more technologically advanced, pilots face a proliferation of high-end and low-end threats, military budgets are squeezed and demand for Navy forces around the globe is growing.

So how will Marine aviation training keep up? In part, with the fielding of technologically advanced simulators. Joint Terminal Attack Controllers using the simulator can coordinate with pilots in the air to identify and mark targets for air strikes from the ground.

In a feat that combined live and simulator training, we conducted a live, virtual and constructive demo. We took equipment already on the aircraft that broadcasts the aircraft's altitude, airspeed and position in real time, and we put a transmitter/receiver unit on top of the building.

We were able to tap into that feed: it took the feed of an actual aircraft on the range, piped it into the simulator, and accurately recreated the aircraft in the virtual workspace.

The aircraft is actually flying on the range and is displayed in the simulator with minimal latency in real-time altitude, airspeed and attitude. That provides real-time control of the aircraft, along with the ability to see it and achieve visual recognition.

Marines are able to look up, actually assess the attitude and profile of that aircraft, and then provide the clearance to employ munitions on the intended target.
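A minimal sketch of the live-to-virtual pipe described above, assuming a JSON-style telemetry message; the field names and message format are illustrative stand-ins, not the actual interface standard:

```python
import json

def parse_broadcast(message: str) -> dict:
    """Decode one telemetry message from the range transmitter.

    Field names here are hypothetical; a real feed would follow a
    defined interface standard."""
    fields = json.loads(message)
    return {
        "lat": fields["lat"],           # degrees
        "lon": fields["lon"],           # degrees
        "alt_ft": fields["alt_ft"],     # feet
        "airspeed_kt": fields["kias"],  # knots indicated
        "heading_deg": fields["hdg"],   # degrees
    }

class SimulatedAircraft:
    """Virtual-world entity driven by the live feed."""
    def __init__(self):
        self.state = None

    def update(self, telemetry: dict) -> None:
        # Overwrite the entity state so the virtual aircraft mirrors
        # the live one with minimal added latency.
        self.state = telemetry

# Example: one message from the rooftop receiver unit (values illustrative)
msg = '{"lat": 34.9, "lon": -117.9, "alt_ft": 12000, "kias": 320, "hdg": 270}'
entity = SimulatedAircraft()
entity.update(parse_broadcast(msg))
```

With each incoming message, the simulator entity is simply overwritten with the latest live state, which is why the virtual aircraft tracks the real one so closely.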

Before we had the simulator, we were really slow in the first few days on the range, because that was the first time operators had done it. With some practice time in the simulator, you get better control and better performance on the range with the live assets, so it is more efficient. The simulator is invaluable for getting Marines ready to go.

It has been harder and harder to get fleet aircraft that can support training, due to a high operational tempo and the challenge of keeping aircraft ready to fly. The more training Marines can get on the range, the better they are when they finally get to an actual aircraft.

“So they’re not stumbling on Day 1; they’re already semi-proficient or trying to get there. In the past, before they had this simulator, you were a mess your first several times. It’s good training for you, but the guys airborne are holding for a half hour just to get a bomb off because the guy on the ground is learning what to do.”

The simulator has created a dramatic improvement in first-pass drops and in communications on the radio. Marines work everything out here, so by the time they’re on the range it’s just the real-life stuff that hits you. … A lot more first-pass drops, which is the whole goal of close-air support.

What Marines will eventually be able to do is connect this to a pilot in an aircraft simulator: the pilot will be flying that simulator, talking to the Marines in this one, and doing all the control work to satisfy their currency and training requirements, while taking targeting cues from other Marines in their own simulator.

A next step toward that vision of connecting multiple simulators spread across the battlespace is an integrated training facility housing, all under one roof, simulators for pretty much everything in the carrier strike group.

We’ll be able to integrate them all together. Eventually we will be able to pipe in feeds from live aircraft out on our range (that’s the live part), and vice versa we hope to pipe what’s being seen, or constructed, in the simulators out to the live aircraft as well.

What we want to be able to do in the future, and this facility is the first step, is machine-to-machine data gathering. That will allow us to gather large amounts of data: not just how aircrews did on an event and the actual actions they took, but also historical data on the aircraft, its systems, and how well those systems have held up.

Automatically, machine-to-machine, we can assess pilot proficiency and see how much flight time a pilot has received recently, helping us build that bigger picture so we can inform leadership with the best information we can give them.

Despite the focus on high-end warfare technologies, aviators could face equally dangerous, less expensive threats like shoulder-launched anti-air missiles. So we invested in surface-to-air missile simulators to help ensure that pilots cycling through training events are aware of the threats they face from the ground and are flying with tactics that keep them safe.

Though the SAM simulators aren’t connected to the planes in the air (so the pilot doesn’t know in real time that he has been “shot” at with the simulator), the simulator logs video of each encounter. That video is incorporated into the pilot’s debrief after a training event, with instructors explaining whether his flight profile would have kept him safe or put him at risk from ground threats.

This is how you prove to Marines: you’re reachable, you need to be careful, and you need to know what you’re doing and get your tactics right. Everything out there is beatable; you’ve just got to know what you’re doing and get your tactics right.

A readiness tool allows top brass to determine which battalions and gear are most prepared for battle.

The Marine Corps is experimenting with artificial intelligence to improve the way it deploys its forces and to spot potential weaknesses years in advance.

The Marines built a tool that crunches data on personnel and equipment to measure how prepared individual battalions are for combat. The tool could ultimately help top brass deploy some 186,000 active-duty Marines and countless pieces of military hardware.

Allocating the service’s resources is an imperfect science. Leaders map out deployment strategies years or even decades in advance, but situations will invariably arise that throw a wrench in those plans.

Planners are constantly forced to “reshuffle the deck” as crises flare up in different places and figuring out which units to move around is a complicated process. Numerous factors—training, deployment history, equipment readiness and others—affect how prepared a group is for a given situation.

Today planners rely on spreadsheets, whiteboards and basic applications to track readiness and manage forces, but artificial intelligence can offer them a better understanding of the resources at their disposal and the long-term effects of the decisions they make.

The tech crunches both structured and unstructured data from multiple force management applications to create a real-time image of how prepared each unit is for combat. The tool specifically aims to build a five-year management plan for the Marine infantry battalions.

The tool has two primary functions: it flags the units that are most ready for action and explains why others come up short. Armed with that knowledge, commanders can proactively train and invest in less prepared groups before they fall even further behind.
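As a rough illustration of how such a tool might flag ready units and explain shortfalls, here is a sketch with hypothetical readiness factors and weights; the fielded system draws on far richer structured and unstructured data:

```python
# Hypothetical factor weights; a real tool would learn these from data.
WEIGHTS = {"personnel": 0.3, "equipment": 0.3, "training": 0.25, "deployment_rest": 0.15}

def readiness_score(factors: dict) -> float:
    """Weighted readiness on a 0-1 scale."""
    return sum(WEIGHTS[k] * factors[k] for k in WEIGHTS)

def flag_units(units: dict, threshold: float = 0.8):
    """Split units into ready and short, attaching the weakest factor
    for each shortfall so commanders know where to invest."""
    ready, short = [], []
    for name, factors in units.items():
        score = readiness_score(factors)
        if score >= threshold:
            ready.append(name)
        else:
            weakest = min(factors, key=factors.get)  # lowest-scoring factor
            short.append((name, round(score, 2), weakest))
    return ready, short

# Illustrative battalion data, not real readiness figures
battalions = {
    "1st Bn": {"personnel": 0.95, "equipment": 0.9, "training": 0.92, "deployment_rest": 0.8},
    "2nd Bn": {"personnel": 0.7, "equipment": 0.55, "training": 0.8, "deployment_rest": 0.9},
}
ready, short = flag_units(battalions)
```

The second return value carries the "why": each flagged unit comes with its weakest factor, which is what lets commanders act before a unit falls further behind.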

“A lot of times Marines only invest more when a problem arises. Now they can see it ahead of time and say, ‘OK, we’re going to take action now to prevent that from occurring.’”

The tool sheds light on how deployment decisions will affect forces in the long run. By analyzing historical trends along with real-time data, the tool could show how a unit’s readiness would change if it were, for instance, moved to a new location or given additional resources.

Marines are also building a separate AI system that ranks course of action plans based on those extrapolations, which could one day be merged with the readiness system.

“You integrate that all together and you get a full view of readiness across your force. Now you can really make some data-driven decisions.”

The next stage of the effort will include parts of the Marines’ aviation and logistics units, bringing about half the branch into the purview of the program. With that additional data, the AI can further refine its processing rules to deliver better results.

So artificial intelligence could be tasked with managing the particular deployments of troops in battle, moving them around in new and unexpected ways.

One way that future might manifest is by looking at a place where AI already manages workforce inventory, like a warehouse stocking system: items are unloaded wherever there is space in the warehouse and then scanned into a computer system that can track where each item is located.

When it comes time to retrieve an item for delivery, the same computer system directs warehouse workers to the most efficient route for finding the item, which could be stowed throughout the warehouse.
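The chaotic-storage pattern just described can be sketched in a few lines; slot identifiers and the one-pass routing heuristic below are simplifications of what a real warehouse system does:

```python
class Warehouse:
    """Chaotic storage: items go wherever there is space, and the
    system, not the shelf layout, remembers where everything is."""
    def __init__(self, slots):
        self.free = list(slots)  # open slot ids
        self.index = {}          # item -> slot

    def stow(self, item):
        slot = self.free.pop(0)  # any open slot will do
        self.index[item] = slot
        return slot

    def pick_route(self, order):
        """Return the slots to visit, sorted so a worker walks the
        aisles in one pass (slot ids stand in for positions)."""
        return sorted(self.index[item] for item in order)

# Illustrative slots and parts
wh = Warehouse(["A1", "B4", "A2", "C3"])
for part in ["rotor", "valve", "seal"]:
    wh.stow(part)
route = wh.pick_route(["seal", "rotor"])
```

The point of the sketch is that placement is arbitrary but retrieval is efficient, because the index, not physical grouping, carries the organisation.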

When modeling the warehouse system, it is interesting to consider how AI, given the same objectives as a commander, might organise and direct forces to achieve them.

“Why would an AI allocate forces in distinct areas of the battlefield? It could intermingle them and manage them at a granular level. Its categories are way more numerous, in the way that a warehouse AI manages categories at the shelf level.

Instead of distinct groupings of armor, air support, infantry, and artillery, a system run by artificial intelligence and managing a battle could coordinate a single helicopter with a pair of howitzers and an infantry platoon, directly grouping each in the same way that a warehouse worker finds an assortment of items to place into the same package.

“Anytime we’re on the road, our job, maintenance wise, is to provide safe and reliable jets for the pilots to accomplish their mission. Every new location presents a different challenge in how we get the job done, but the end goal for providing a safe jet for a pilot never changes. What does change is the environment in which we operate in.”

“Every exercise you go on is different, and it can be hard to start off. It could be not having the parts we need on hand, or not knowing how the base operates to get the support we need. Over time you figure out how to acquire some of that on site, what to bring along yourself and how to solve a problem before it becomes one.”

Here we consider how AI systems could be useful to a typical work order job of launch and recovery of aircraft, engine maintenance and servicing of life-saving equipment-- just a few of dozens of tasks Troops are expected to accomplish within a full day.

“We learn to operate in new environments, out here we’ve adapted our operations to give the best support possible. Maintenance is maintenance, our job never changes, but how we execute the mission does.”

“Our main mission is to enable successful sorties by generating aircraft parts, ultimately maintaining our full spectrum readiness. Our team encounters new repairs that force changes in direction and orders, but they all adapt and constantly find ways to make sure the job gets done.”

Maintaining the aging aircraft can be challenging as some parts are no longer commercially produced and the Fabrication Flight must collaborate and innovate to construct parts on their own.

“We all need each other in order to complete a task and make sure operations are done correctly. Everything revolves in a circle: sheet metal technicians hand parts to metals technicians, who follow their technical order before sending the piece to nondestructive inspection to make sure it is good for use on an aircraft.”

To illustrate the teamwork involved, the Troops walked us through the Fabrication Flight process.

Sheet metal technicians kick off operations by receiving technical orders for aircraft repairs. Troops survey the technical order and pull a thin, malleable sheet from their stock. The sheet is then cut to the specified measurements and handed off to a metals technician to be heat-treated in a large oven.

"On our side we handle breaking the metal down and then crafting it to match the technical order for the specific part. When completed, the piece is hauled over to nondestructive inspection where tests are conducted to ensure the part is compositionally sound and aircraft ready.

"With the resources we have here, we are the final stop on a part's journey to an aircraft,” "If anything is wrong with the part, it's flagged and sent back to the workshop to either correct the issue, or start the operations all over again."

Accuracy in fabrication is essential to getting aircraft back up and flying. When a part has completed all processes and is cleared for use, it is installed on the aircraft, restoring its readiness.

Fabrication Flight Airmen gain a sense of accomplishment by witnessing their work come to fruition each time an aircraft takes off.

“Having combatant commands and other mission partners on base only adds to the importance of mission success. We take pride in the work of the flight, seeing the aircraft out there completing missions thanks to the maintenance here is an amazing feeling.”

By creating a virtual representation of an asset in the field using a lightweight-model "Digital Twin" visualisation, and then capturing information from smart sensors embedded in the asset, you can gain a complete picture of real-world performance and operating conditions. You can also simulate real-world scenarios for predictive operations.
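A minimal sketch of that idea: a twin as a model plus ingested sensor readings, used to run a what-if scenario. The linear wear model and field names are assumptions for illustration only:

```python
class DigitalTwin:
    """Lightweight twin: a simple model plus a log of sensor readings
    from the fielded asset (field names are illustrative)."""
    def __init__(self, wear_per_hour):
        self.wear_per_hour = wear_per_hour  # assumed linear degradation model
        self.readings = []

    def ingest(self, hours, vibration):
        # Capture one reading from the asset's embedded sensors.
        self.readings.append((hours, vibration))

    def predict_wear(self, future_hours):
        """Simulate a what-if scenario: projected wear after
        additional operating hours."""
        hours_so_far = self.readings[-1][0] if self.readings else 0
        return (hours_so_far + future_hours) * self.wear_per_hour

twin = DigitalTwin(wear_per_hour=0.002)
twin.ingest(hours=150, vibration=0.8)
projected = twin.predict_wear(future_hours=50)
```

Real twins replace the toy wear model with physics-based or learned models, but the shape is the same: sensed history fused with simulated futures.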

Advances in virtual prototyping have made it possible to simulate visual fidelity to a very high level. The next big challenge for virtual prototyping teams is simulating realistic interaction. Virtual prototyping, sometimes referred to as digital prototyping, is widely adopted by industry to simulate the visual appearance and functionality of products.

But conventional virtual prototyping techniques lack simulation of the physical properties of a real interaction between user and product. Force feedback is one focus of current virtual prototyping development.

Virtual reality technology creates an alternative reality in which worlds, objects and characters that may not yet exist in reality can be experienced. Stakeholders can not only see the future product, as with a concept sketch or mockup, but also experience the product and its interactions in its use context.

Simulation models used in virtual engineering during the development of training systems can be used during the operation phase as well. To fully benefit from this, the simulation model must be connected to the physical system and other business operations. In this way, information about past operation and current status can be fused with information about possible future operation, explored through virtual scenarios.

The overall result can be used for decision support in, for instance, operational planning or service and maintenance. In this way, simulation serves as a tool for arriving at a position in which future scenarios are perhaps not completely known, but the most likely ones can be readily and adequately addressed.

Artificial intelligence can play a role in virtual manufacturing by improving simulation models or by offering better decision support. Extending the use of simulation models from the design phase to the operation phase also has advantages when new products are to be introduced or the system needs to be reconfigured.

Virtual Reality is an attractive option since it offers the user a sense of being immersed in information where objects have a sense of ‘presence’ and allows them to interface with information at full scale if required. A design begins with an image or idea and the concept is disseminated via diagrams and descriptive speech.

Typically, the information sources for virtual reality activities are not one single specific source, but all the different technical training information systems used in DoD. The integration of these sources is usually not an out-of-the-box solution but most often a highly customised one, engineered by specialists.

"Digital Twins" Provide Line of Sight to 3D Print Part Builds Previously Not Visible.Digital Twins are learning digital models of physical assets, parts, processes and even systems. The purpose of the Digital Twins is to relay data about the performance and properties of a physical counterpart. With this information, Digital Twins will achieve complete repeatability of a 3D printed part, and greatly improve process reliability.

In 3D printing we start in the digital world, with a 3D digital model of the desired component. We can then build the part according to that model, take the physical component and carry out our own 3D scan, creating yet another 3D model. So we have a digital representation of what the designer/customer wants, the actual part that we can touch and feel, and a Digital Twin of that actual part.

The Digital Twin of the actual part can then be sent back to the designer, who can compare what we have manufactured to what the model specifies, and even use the as-built model to simulate its impact, digitally, in the final design.

In the case of a 3D printer, we’re building a Digital Twin of a build process and recording the slightest defects, deviations and other build characteristics. With Digital Twins, models will continually be updated with each new build and become ever smarter in recognising and troubleshooting any potential issues that might arise.

Not only will there be a Digital Twin of the component, showing the internal and external requirements, but also a Digital Twin of the process that made the part: the process parameters, how long the build took, how many layers were built, whether there were any issues. All of these aspects build a digital picture of the part, enabling further analysis and confidence in the final applications of components.
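A sketch of what such a per-layer process record might look like as a data structure; the fields and values are illustrative, not a real machine's log format:

```python
from dataclasses import dataclass, field

@dataclass
class LayerRecord:
    """One layer of the build: parameters plus any observed defects."""
    number: int
    laser_power_w: float
    duration_s: float
    defects: list = field(default_factory=list)

@dataclass
class BuildTwin:
    """Digital twin of one 3D-print build: process parameters and
    deviations captured layer by layer."""
    part_id: str
    layers: list = field(default_factory=list)

    def log_layer(self, record: LayerRecord) -> None:
        self.layers.append(record)

    def summary(self) -> dict:
        # The digital picture of the build: totals plus flagged layers.
        return {
            "layers": len(self.layers),
            "build_time_s": sum(l.duration_s for l in self.layers),
            "flagged": [l.number for l in self.layers if l.defects],
        }

twin = BuildTwin("bracket-07")  # hypothetical part id
twin.log_layer(LayerRecord(1, 200.0, 12.5))
twin.log_layer(LayerRecord(2, 198.5, 12.7, defects=["porosity"]))
report = twin.summary()
```

Because every layer is recorded, the twin can answer exactly the questions listed above: how long the build took, how many layers were built, and where issues appeared.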

The next generation of Digital Twins incorporates information from other sensors monitoring the 3D printing process, such as the shape of the pool of metal rendered molten by the laser. In addition, this smart, real-time quality control will not function in isolation.

The power of Digital Twins is their ability to share insights with each other. So you can imagine many 3D print machines sharing unique build insights with each other that makes them each more informed about what to watch for during a build process.

Through the Digital Twin process, you can accelerate the production of mission-critical equipment. Using Digital Twin technology, we’re aiming to rapidly speed up the time that parts could be re-engineered or newly created using 3D printing processes.

The key challenge with 3D printing is additively building a part that mirrors the exact material composition and properties of an original part formed through subtractive methods. With mission-critical parts in operation, there is no room for deviations in material performance or manufacturing error.

Properties and serviceability of 3D printed components are affected by their geometry, microstructure and defects. These important attributes are currently optimised by trial and error because the essential process variables can’t currently be selected from scientific principles.

A solution is to build and validate a Digital Twin of the 3D printing process capable of predicting the spatial and temporal variations of the physical parameters that affect the structure and properties of components.

In principle, the Digital Twin of the 3D printing process, once validated with accurate experimental data, would replace or reduce expensive, time-consuming physical experiments with rapid, inexpensive numerical ones. In the initial phase, the Digital Twin would take all the important 3D-print process variables as input and provide a transient 3D model.
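As a toy stand-in for such a transient model, a one-dimensional explicit heat-diffusion step gives the flavour of the numerical experiment. The material values are illustrative, and a real process twin would be fully three-dimensional and far richer in physics:

```python
def diffuse(temps, alpha, dx, dt, n_steps):
    """Explicit 1-D heat diffusion (FTCS scheme) with fixed-temperature
    boundaries: a minimal stand-in for the transient thermal model in a
    printing-process twin. Stable when alpha*dt/dx**2 <= 0.5."""
    r = alpha * dt / dx**2
    assert r <= 0.5, "time step too large for stability"
    t = list(temps)
    for _ in range(n_steps):
        t = [t[0]] + [
            t[i] + r * (t[i - 1] - 2 * t[i] + t[i + 1])
            for i in range(1, len(t) - 1)
        ] + [t[-1]]
    return t

# Hot spot under the laser, cooler bed either side (illustrative values)
profile = diffuse([300, 300, 1600, 300, 300],
                  alpha=5e-6, dx=1e-4, dt=5e-4, n_steps=100)
```

Each run of the model is one cheap "numerical experiment": sweep the process variables (power, speed, spacing) and watch how the predicted temperature field, and hence microstructure, responds.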

1. Systems design: Design before you build with a visual, simulation-driven approach.

6. Establish an open, flexible simulation system: Such a system is necessary to incorporate information sets from multiple engineering domains and quality control.

7. Align combat engineering teams for better collaboration: Disconnected combat engineering teams across mechanical and electrical systems, working in their own workgroups, must collaborate as needed; the utility of a systems-level view of products must be evaluated.

8. Balance vitality and stability: Balance the vitality of innovation with reuse and predictable stability when establishing an innovation platform for simulation and during product design and engineering.

9. Unify simulation-connected systems optimisation: A single view across system, product and process domains is required for successful simulations.

10. Incorporate quality with design and development: Achieving a high level of product quality is why simulation virtually validates the systems-level view. Embedding quality information from early-stage design through subsequent product phases is key, so simulations can more easily flow from system designs into product attributes.

Tue, 11 Dec 2018 16:38:03 GMT
http://www.marinemagnet.com/status-updates/top-50-digital-twin-virtual-prototype-questions-combine-connected-productnetwork-platform

The powerful utility of Digital Twins in prototyping enables you to answer many fundamental questions without any physical prototype or testing. Everything is digital. So you are getting smarter about the operation of this product without spending lots of money to build anything physical.

With respect to developing products linked to networks, we’re in the early stages. But soon, some companies are going to want to sidestep development problems they’re experiencing. They’re going to realise that building and testing five rounds of prototypes is unacceptable. They’re going to realise months of delay completely undermines their competitive position in the marketplace.

As a result, those companies are going to want to adopt more proven and standardised practices. Virtually prototyping with Digital Twins is not there yet. But given the rush of companies toward network solutions, you should expect the demand for this practice to only increase.

The concept of a Digital Twin is mostly applied to the case where more insight is obtained from a physical operating product and a network platform. But in the case of virtual prototyping, the Digital Twin concept is applied to an old practice in mechanical hardware.

One approach to developing a smart connected product, one that hooks up with a network platform, is to just build it. Just piece it together. Throw some sensors on a product. Wire that to some kind of embedded system. Wire that to your antenna. Start sending data to a network platform. You and your organisation can actually learn a lot from going through that exercise.

While that needs to be done, you will quickly run into limitations on the experiments you can conduct with physical prototypes. Swapping out a sensor isn’t easy when it’s soldered in place. There might not be room, physically, for the sensor you really need for accurate measurement. You might run into too much electromagnetic interference for the antenna you planned to use.

Working through these issues is new to some organisations as they transition traditionally mechanical products to smart connected ones. However, the problems associated with resolving issues through physical prototyping aren’t new. In fact, that is an old concept when it comes to hardware. Long ago, mechanical and electrical engineers figured out that modeling and simulating a design virtually makes you more likely to get it right the first time when you reach prototyping and testing.

The benefits of an approach utilising virtual prototyping with digital twins are many. You have fewer rounds of prototyping, saving money and time. You have fewer change orders. You stay on schedule. You stay on budget.

So while virtual prototyping is new to some organisations, the approach has clear advantages when applied to the development of linked smart, connected products and network platforms. Digital Twins are a key enabler.

How exactly can Digital Twins be used to virtually prototype smart, connected products and network platforms? You first need to set up the digital model component of a Digital Twin with one of the following:

Numerical Models: These models use machine learning and artificial intelligence tools. The applications or agents either extrapolate data or correlate it to known events, both in an effort to predict future behaviour.

1D Simulation: These models combine flow diagrams with equations or formulas behind the blocks to simulate the performance of embedded systems or multi-disciplinary engineering systems. These models can provide deeper insights into ongoing operations.

3D Simulation: These models, often in the form of multi-body dynamics, are commonly used to predict the dynamics and structural performance of products. These models can provide deeper insights into ongoing operations.

For scenarios based on engineering physics or asset operation, no prototype or operating product exists. As such, there is no sensor data to feed this digital model. However, the model can be fed historical sensor data from prior products or even from past physical tests or operational data. In the worst case, a set of inputs can be modeled using statistics or even a higher level simulation, such as a multi-body dynamics model. This creates a set of input data that can be fed to the digital model.

The combination of that digital model and the input is enough to get started. You run the model as a simulation, generating data from virtual sensors, which are points of measurements from the simulation.

In this application, that virtually generated data is used instead of physically recorded data from sensors. That output can then be fed to a network platform as if it were receiving streaming data from a running product. Only, in this case, there is no physical product. There is only a virtual product that is running in a simulation.
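A sketch of the idea just described: a simulated signal sampled at "virtual sensor" points and packaged as messages shaped like live telemetry. The vibration signal model, sampling rates and message format are all assumptions for illustration:

```python
import json
import math

def virtual_vibration(t, rpm=1800, amplitude=0.5):
    """Virtual sensor: sample a simulated vibration signal instead of
    reading a physical accelerometer (the signal model is illustrative)."""
    return amplitude * math.sin(2 * math.pi * (rpm / 60) * t)

def stream(duration_s, rate_hz):
    """Emit messages shaped like live telemetry, so the network platform
    receives them as if a physical product were running."""
    for i in range(int(duration_s * rate_hz)):
        t = i / rate_hz
        yield json.dumps({"t": round(t, 4),
                          "vib": round(virtual_vibration(t), 4)})

# Ten samples over a 1-second window at 10 Hz (rates are illustrative)
messages = list(stream(duration_s=1, rate_hz=10))
```

From the platform's side, these messages are indistinguishable from physically recorded sensor data, which is exactly what lets you exercise the whole pipeline before any hardware exists.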

In this scenario, you overcome many issues that you might experience when trying to physically prototype a smart, connected product.

You can change anything related to the sensor configuration, including placement or type.

You are not limited by network bandwidth other than the limitation between the compute resource running the Digital Twin and the one running the network platform.

You can change the product design in terms of mechanical or electrical hardware, embedded systems and more. There is a tremendous amount of flexibility with this approach.

So now you have this concept of using a Digital Twin to virtually prototype a smart, connected product that is linked to a network platform. What does that get you? Interestingly, it allows organisations to answer a set of serious questions.

1. Is the systems configuration right for this product?

2. Do we need to use a physical sensor to capture this data, or can it be a virtual sensor?

3. Is the data we want to flow to the network platform limited by bandwidth?

4. Is edge processing required for the sensor data?

5. What data should be processed on the product versus being fed to the network platform?

6. Are there changes that should be considered to improve placement of sensors?

7. Are there changes that should be made to avoid interference?

8. How will the connected product and network platform work together to fulfill requirements?

9. What conclusions can be drawn from the data once it is in the network platform?

10. What data trends are precursors to events critical to the connected product?

To ensure digital networking of production systems and the optimisation of material-specific requirements, we need to measure, assess and replicate the changes in material properties in a process where "Digital Twins" of materials are created.

The materials digital space has laid the groundwork for this process. When a finished part rolls off the production line, this is one of the first questions always asked: "Does this component have the properties we want?"

Often, even the tiniest of variations in the production environment are enough to alter a part’s material properties – and throw its functionality into question.

Manufacturers avoid this by close inspection of samples throughout the production process. Breaking down the samples into their composite parts and measuring them separately is an extremely time-consuming process.

"The outcome of the sample testing process branches out into an array of different subsets, each with their own specific measurement results. While experts may be able to keep an overview of the complex interrelationships in their heads, until now there has been no way to take the diversity of resulting data and portray it in a coherent digital format."

Now, for the first time, a proof of concept has been developed demonstrating that it is possible to digitally represent many such material processing cycles with a materials data space for test specimens produced using additive manufacturing.

"The data space concept allows us to integrate any type of material information into a digital network – a really valuable tool. We want to use the materials data space to automatically generate a digital twin of each material that will mirror the current state of the physical object under examination."

Data spaces can be used to integrate all types of materials information into digital networks. The advantage of the materials data space is that it provides an overview of all relevant parameters at a glance, whereas formerly data on different material parameters was scattered among numerous data repositories in many different formats.

But the real promise lies in the future. "In the years to come, the materials data space has the potential to become the production command center. Whenever component quality isn’t up to the expected standard, you can compare it with information on previous components stored in the materials data space to determine whether the present component can in fact be used or whether it must be rejected."

In the future, these results could be automatically integrated into industrial decision-making processes: whenever component quality dips below the required standard, production automatically comes to a halt.

Creating the data space, and managing the diversity of materials data, calls for a corresponding information model. "In this case, the model reflects the natural material world, in which material states and properties are assigned to defined categories."

The best way of thinking about it is in terms of a social network where each user is a node, and these nodes in turn have their own subject-matter associations. What we do is create semantic relationships between the individual material objects and their associated processing steps.

Then there are also interrelationships among these communities. What would be a “follow” on social media is represented in the materials data space by details on the chronological sequence of production or work steps, for instance "leaving the additive manufacturing process" or "this laser is part of the 3D printing process".
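A minimal sketch of such a semantic data space as subject-relation-object triples; the relation labels and specimen names are illustrative, not the demonstrator's actual schema:

```python
class MaterialsDataSpace:
    """Materials as nodes, with labeled semantic edges between them,
    in the social-network sense described above."""
    def __init__(self):
        self.edges = []  # (subject, relation, obj) triples

    def relate(self, subject, relation, obj):
        self.edges.append((subject, relation, obj))

    def query(self, relation=None, obj=None):
        """Find all subjects matching a relation and/or object."""
        return [s for s, r, o in self.edges
                if (relation is None or r == relation)
                and (obj is None or o == obj)]

space = MaterialsDataSpace()
space.relate("specimen-42", "produced_by", "additive manufacturing")
space.relate("laser-3", "part_of", "3D printing process")
space.relate("specimen-42", "measured_by", "tensile test")
am_parts = space.query(relation="produced_by", obj="additive manufacturing")
```

Because relationships are explicit labeled edges rather than fixed table columns, queries like "everything that left the additive manufacturing process" fall out naturally, which is the flexibility a conventional database struggles to match.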

The new demonstrator for additively manufactured metal components has the capacity to generate samples, characterize the materials they contain, conduct subsequent data analysis and determine material properties.

Thanks to the logic underpinning the model, users can make extremely complex queries of the data space that simply wouldn’t be possible with the same degree of flexibility in a conventional database.

1. Creation of quality prediction models: Collected sensor/process network data is crunched with AI techniques to build quality prediction models.

3. Creation of production schedules: after production simulation, schedules specifying machine and production target outputs against a timetable are sent to the reference coordinator builder, which feeds the quality/productivity detector

4. Request for reference performance indicators: the coordinator delivers the production schedules to the performance-indicator simulation sub-system to be used in production-monitoring standards

5. Transmission of reference performance indicators: reference models created from simulation results serve as the manufacturing-execution monitoring criteria, including target quality and production per unit time per process

6. Quality analysis and prediction: the quality and productivity detector predicts quality and productivity in real time using the quality prediction model and the reference performance indicators

8. Request for future performance indicators: future performance indicators arising from responses to irregularities, together with the current schedule, are sent to the performance-indicator simulation sub-system

9. Request for new schedules: the coordinator requests a new schedule when the difference between initial and predicted future performance due to irregularities is significant

10. Transmission of visualisation information: the progress of the entire system is simultaneously sent to the dashboard as information visible to the user through the graphical user interface

Top 10 “Digital Twin” Engineering Team Strategies Define Agent-Machine Interaction Focus Configuration Programme Goals

1. Formalise the planning, development, integration, and use of models to inform enterprise and programme decision making and to support engineering activities that digitally represent the system of interest

2. Ensure models are accurate, complete, and usable across disciplines to support communication, collaboration, performance, and decision making across lifecycle activities

Sat, 01 Dec 2018 15:37:31 GMT
http://www.marinemagnet.com/status-updates/top-10-structure-assessment-tool-requirements-provide-evaluation-of-status-update-framework

Current readiness systems include only the commander’s best estimate of equipment status. Estimates have traditionally covered the overall equipment assigned to the unit, not individual pieces of equipment.

Military Services use systems to maintain records of equipment under service, but those records do not include information about which units the equipment is assigned to.

Central to the work presented here was the development of a tool, the Marine Air-Ground Task Force Equipment Structural Assessment, which was loosely based on a previously developed plan.

Inputs to the system consist of a MEU equipment list, the tasks identified through the mission deconstruction process, the measures and metrics used to define equipment capabilities, and the set of linkages between tasks and equipment.
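A minimal sketch of these inputs (all equipment and task names below are invented for illustration, not drawn from a real MEU loading list): the task-to-equipment linkages let the tool flag tasks for which nothing capable is embarked, i.e. likely shortfalls.

```python
# Illustrative model of the assessment tool's inputs: an embarked
# equipment list, mission tasks, and task -> equipment linkages.
# A shortfall is a task with no linked equipment item embarked.
equipment_embarked = {"MTVR_truck", "water_purifier"}
task_links = {
    "transport_supplies": {"MTVR_truck", "LAV"},
    "purify_water": {"water_purifier"},
    "breach_obstacle": {"ACE_dozer"},  # nothing embarked can do this
}

def shortfalls(embarked, links):
    """Tasks for which no linked equipment item is aboard."""
    return sorted(t for t, eq in links.items() if not (eq & embarked))

print(shortfalls(equipment_embarked, task_links))  # ['breach_obstacle']
```

This is only a data-model sketch; the real tool also carries the measures and metrics used to grade how well each linked item performs a task.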

What equipment is available to the MEU to accomplish mission tasks and subtasks? A diverse set of factors affect the types of equipment aboard a MEU, including not only space available but also risk trade-offs made by commanders and expectations about the mission on deployment.

Since there is no standardised table of equipment for a MEU, the study team obtained a list of equipment assigned to a recent MEU, which included information on what was embarked and what was left behind.

What measures and metrics should be used to assess the capability of selected equipment? The loading list provided the set of available equipment. We then used equipment manuals and sponsor input to define the capabilities of each piece of equipment in performing designated tasks. This information is displayed to the user when a piece of equipment is selected.

We identified the measures and metrics, or “planning factors,” needed to assess the capability of each piece of equipment in the loading list. In our initial construction of the tool, we identified which alternative equipment might accomplish a task, but not as effectively.

We concluded that either we had it wrong or that other equipment might do just as well or better than what we selected. Consequently, equipment selection is now up to the user of the tool. The upgrades proceeded in parallel with these activities.

We have not received official Marine Corps approval to run our application at job sites. The programme is a valuable materiel readiness information tool designed and tested for the Marine Corps.

Although we have designed the logical architecture and operation modes, and users train on existing systems, we have found a concerning lack of training on logistics information tools throughout the Marine Corps.

The tool supports this objective by asking the user to define mission-specific characteristics and allowing the user to tailor equipment lists, equipment priority, and task priority as appropriate.

The approach used in this report is for the user to develop planning factors with the tool and then use it to assign equipment to tasks.

This process provides a framework that MEU commanders can use to develop mission plans and understand where equipment shortfalls are likely. The process consists of simple steps that translate mission requirements into tasks, subtasks, and military activities, each of which is linked directly to the types of equipment needed for completion. It also highlights key parameters that may affect the types of equipment needed or the execution of key tasks.

The Site Visit Executive can first look at broad readiness, but can also examine the readiness levels of subordinate units, providing the ability to control, distribute, and replenish equipment and supplies in assigned areas of operation and to exchange supply support with other services.

Readiness terms are used in different contexts and processes. Operational gaps in systems used by Marine units must be closed so that information exchange is seamless, and the capability to link information as it is processed by units must be built.

Aggregated information provided to commanders must be traced and linked to the operational systems used to roll up that information. But no Marine Site Executive has yet stood up to identify the functions spanning these processes and write the terms required to support them.

If the Site Visit Executive has a better overview of equipment status, resources will be allocated and pooled more efficiently, so that the greatest potential for operational readiness is realised.

Information from readiness systems is required to determine the number of pieces of equipment available for deployment. No Site Executive has yet created an easy way to link the equipment information available from readiness and Services systems.

Technological advances in production and distribution can strengthen the Navy and Marine Corps aviation parts supply chain. Improved spare parts logistics systems and 3D printing will increase flight availabilities and decrease costs. 3D printing is the headline of how far we’ve come with efficiencies, both at the fleet readiness centers and out in the field.

The entire spare-parts logistics system has the potential to be sped up with the use of 3D printing. For forward-deployed forces, 3D printing increases availability and saves costs by quickly producing small replacement parts onsite instead of waiting for the supply chain to deliver them from far away.

In addition, 3D printing offers industry a way to quickly manufacture the parts aircraft maintainers need without necessarily sinking money into new machinery to make specialised, infrequently requested components.

Ultimately, this on-demand manufacturing will help companies control their costs. The only limiting factor is the ability of 3D printers to create airworthy parts. “We’re at the front end of this. There are parts that require airworthiness approval, and the non-airworthiness parts are easier to do.”

The maintenance portion of an aircraft programme is of equal importance to new acquisitions in keeping costs down. “We’ve got to operate it, and sustain it, and fly it for the lifecycle. So understanding your supply chain and making sure it’s robust is key.”

A new logistics sustainment system Marine maintainers are trying out will help both the service and the industrial base adjust how they purchase and manufacture replacement parts. The new system prioritises the allocation of replacement parts to aircraft based on how quickly each aircraft will return to service after the part arrives.

Consider the fate of two aircraft from different squadrons. Both are grounded, and each requires the same replacement part, but one of the aircraft needs additional other work done to get back in the air.

Under the current system, the part goes to the maintainers who request it first, even if their aircraft needs additional work and will remain grounded for weeks. Meanwhile, the aircraft that required only the one part could have been ready sooner, but remains unavailable while waiting for part delivery.

“We’re now using supply optimisation tools that take a look across a base, and not only a base but across a type/model/series. As an example, a long lead-time part is coming in, so what airplane benefits most from that? That’s one area where we’re using agent learning to aid decision making.”
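The allocation rule in the two-aircraft example above can be sketched as follows (the field names and numbers are assumptions for illustration, not the real system’s data model): give the incoming part to the aircraft that returns to service soonest once the part is installed, rather than to whoever requested first.

```python
# Two grounded aircraft need the same part; A1 asked first but still
# needs three weeks of other work, A2 needs only the part.
aircraft = [
    {"id": "A1", "needs_part": True, "other_work_days": 21, "requested_day": 1},
    {"id": "A2", "needs_part": True, "other_work_days": 0,  "requested_day": 3},
]

def allocate_part(fleet):
    """Pick the aircraft with the least remaining work after the part
    arrives, i.e. the one that benefits most from the part."""
    waiting = [a for a in fleet if a["needs_part"]]
    return min(waiting, key=lambda a: a["other_work_days"])["id"]

print(allocate_part(aircraft))  # 'A2' returns to service fastest
```

A production optimiser would weigh many more factors across a type/model/series, but the ranking criterion is the same idea.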

We got the chance to learn about the latest technologies showing potential for Marine Corps deployment. We are covering this expo as a team who has never been to one, so here is our perspective on the experience.

Even though we had never been to a Marine expo before, we had a pretty good idea of what it would entail. We were expecting multiple companies to be set up in theatre showing off their latest tech advancements and best manoeuvre practices.

These expos seem like a great way to learn about Marines and connect for possible future partnerships or work. In a nutshell, we believed this advanced expo would house a lot of tech and experienced Marines and that we would be able to learn a lot from it.

When we walked through the doors of the expo, the number of Marine suppliers seemed infinite. Everywhere you looked there was something new to see and learn about. The first supplier that caught my eye was a growing company that had designed a sorting system for small parts.

The group realised their design’s potential for larger parts and its applicability to multiple industries. They upscaled the design, going from one machine built for extremely small parts to an array of machines, each able to sort parts of different sizes much larger than the original design handled. It was incredible to see how one design could be changed only slightly and have so many different applications for Marines.

There were no limitations when it came to suppliers and the number of different AORs present at the expo. There were solutions for warehouse storage, automation, sorting, milling bits, and even 3D printing. One of our favourite booths was run by a company with all the 3D printing solutions you could ever need, even for new areas of the field.

These suppliers were displaying their new desktop metal 3D printer and its ability to print in metal. 3D printing in metal was something we had heard about before, but we imagined it had only been done by a very small number of Marine companies.

Right there in front of us, this printer was creating quite incredible parts, all in metal. We were told these prints had strength qualities similar to cast metal parts and were printed using a type of metal powder mixed with wax. This was incredible to hear for anyone who had only ever experienced 3D printed plastic parts. Looking around the booth, it was easy to see the Marines’ gears turning as they considered all the field applications this desktop metal printer could serve.

Going into this expo we didn’t have much knowledge about Marine manoeuvres. After speaking with suppliers showcasing part sorting/packing systems, we were surprised to learn a significant amount about sensor sorting systems. These machines simply take images of the sorting bed and use tools to tell which items to grab or re-sort.

We saw how easy 3D printing in metal is, and its benefits compared to plastic. With this machine able to create custom metal parts that no other manufacturing process can, it is easy to see the endless possibilities and applications for this process.

An expo is a great place to satisfy our curiosity. With all kinds of new technology and processes you can check out all the booths for hours finding the answers to all your pressing questions about the Marines.

These expos don’t need to be just for Marines. We were welcome to learn a little more about the processes and machines used to make products. Overall, going to this advanced expo was a great experience and we will be attending many more in the future.

Groups and organisations hyping artificial intelligence solutions popped up everywhere at the expos, promising to create the next battlefield advantage using next-generation weapons, gear, or satellites. The term artificial intelligence splashes the headlines with promises that we’re moments away from revolutionising the battlefield.

It’s frustrating. The special AI “task forces” and their massive budgets are great, but it’s time to get honest about the rest of the military.

Ask any Marine their opinion of how things run on a daily basis and you will hear complaints about lost orders, broken gear, and outdated technology.

Bottom line: all those flashy AI applications being touted as perfect for Marines use are not going to run on the outdated infrastructure on which a majority of the military still operates.

That doesn’t mean that AI isn’t a good fit or shouldn’t be pursued. But it does mean that AI success requires a force readiness approach. First, AI isn’t new and it isn’t new to the military.

Marketing hype around the term has surged lately, but the fact that something wasn’t historically tagged as artificial intelligence does not change the fact that it actually was AI.

Despite the hype, AI is simply a field of science that trains systems to perform some human task through learning and automation. There are varying degrees of sophistication but most of the mining, network assessment constructs and mapping technology used over the past decade or more have all been forms of AI.

Weapons systems and combat vehicles have been leveraging AI for many years as well. So don’t let the noise change the focus from the mission need.


Marines on the front lines need their supporting forces to be trained and armed with the appropriate technology to support the advances being operationalised on the battlefield. If we look specifically at the intelligence arena, the vast majority of military intelligence analysts are still using the same products and systems from 10-15 years ago.

Efforts around collecting intelligence are ripe for sophistication, but what about the Marines who have to sift through and make sense of that additional data? How has their training changed to account for a more technologically advanced battlespace? How do products and solutions that integrate requirements and workflows with real-time information truly augment their efforts?

The majority of data mining and visualisation tools on the market have flashier interfaces than we saw a decade ago, but the true sophistication of what the vast majority of Marines have been offered doesn’t really reflect the decade of advancements seen in the commercial market.

You don’t have to be a part of a high profile AI initiative to find value in the science for nearly all areas of the military. We need the whole force to have the technical advantage on the battlefield and that means AI must become a force readiness initiative.

It’s all about augmenting human efforts across battalions, regiments and divisions to raise the readiness levels of the entire force. Marines inside the wire should have the knowledge, technical skills and agility to support all of the operations and technology our troops outside the wire are running.

Then there’s the applicability across all military systems.

An “AI watchman” could prevent ships from colliding with one another, since the computers are “constantly looking at sensor data and making sense of the environment and the situation.”

“There is that safety aspect of using artificial intelligence to augment the level of capability and intelligence available on ships, on tanks, in aircraft, all over, where you almost have an embedded AI technician be part of every military asset.

“That is a capability and it leads to benefits that are tremendous. And the possibilities may be endless.” There’s an easy answer to the question of where AI can be applied: “It can be applied literally everywhere.”

“The biggest part of the problem of artificial intelligence is: they build these incredibly long algorithms with all of these gates to go through. They push all of this machine learning and data through it. Frankly, we are not entirely sure how all of that works, all the time.”
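As a toy instance of the “AI watchman” idea above, a collision monitor can compute the closest point of approach (CPA) from a contact’s relative position and velocity; all values here are illustrative, not from any fielded system.

```python
import math

def closest_point_of_approach(p_rel, v_rel):
    """Given a contact's relative position (nm) and relative velocity
    (kn), return (time_hours, distance_nm) at closest approach."""
    px, py = p_rel
    vx, vy = v_rel
    speed2 = vx * vx + vy * vy
    if speed2 == 0:                       # no relative motion
        return 0.0, math.hypot(px, py)
    # Time that minimises |p + v*t|, clamped to the future.
    t = max(0.0, -(px * vx + py * vy) / speed2)
    dx, dy = px + vx * t, py + vy * t
    return t, math.hypot(dx, dy)

# Contact 10 nm due east, closing due west at 20 kn: CPA 0 nm in 0.5 h.
t, d = closest_point_of_approach((10.0, 0.0), (-20.0, 0.0))
print(round(t, 2), round(d, 2))  # 0.5 0.0
```

A watchstanding system would run this continuously over every tracked contact and alert when the CPA distance falls below a safety threshold.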

1. Use language processing and machine learning to automatically classify and match incoming data to indicators and warnings being monitored. Provide alerts on trending topics, keywords or themes that may indicate emerging tactics, techniques and procedures.

2. Display both geographic and temporal representation of multi-intelligence data with a natural language generated summary of the data. Include the ability to break data into individual entities as needed and internalise analyst annotations into the automated summary.

3. Automatically map finished intelligence products to the priority intelligence requirements they help answer, with automated caveat classification of documents tied to user permissions. Include smart search capabilities for that repository so analysts can find relevant products more efficiently.

4. Monitor human developed courses of action beside computer generated courses of action including the criteria for suitability, feasibility, acceptability, uniqueness and completeness. A machine will see information differently than its human counterparts and may identify behaviour discrepancies present in data that human analysts may miss due to the sheer volume and complexity of reporting that an intelligence analyst is presented with.

5. Capture workflows and product development in a shared space so knowledge gaps are reduced between shifts or rotations. Use automation to track knowledge gaps, tag and map intelligence gaps as new information comes in, and alert users to update analysis and finished products when significant gaps are filled.

6. Measure impact of operational intelligence and associated collections or requests that contributed to that intelligence by automating inputs and processes serving as operational measurements.

7. Add cognitive search into the massive data repositories analysts are required to sift through to move beyond keyword search and enable contextual search at an enterprise level.

8. Provide in-depth training on AI systems and set standards on how technology augments the human analytic process without replacing the analyst behind the screen. In short, tie the technology into existing workflows and adjust workflows to account for technological innovation.

9. Conduct initiatives in parallel with operations to ensure force efforts are complementary and requirements are aligned, with alerts for discrepancies or gaps between the operational plan and the intelligence needed to execute it.

10. Invest in both garrison and tactical systems and infrastructure capable of running and sustaining the increased computing power that comes with training and deploying AI programmes.
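Item 1 in the list above can be illustrated with a minimal keyword matcher (indicator names, keyword sets, and the report text are invented for illustration; a fielded system would use proper language processing, not substring matching):

```python
# Match incoming report text against monitored indicator keywords and
# return an alert per matched indicator, sorted for stable output.
indicators = {
    "uav_activity": {"drone", "uav", "quadcopter"},
    "ied_threat": {"ied", "roadside bomb"},
}

def classify(report, indicators):
    """Return the names of all indicators whose keywords appear."""
    text = report.lower()
    return sorted(name for name, kws in indicators.items()
                  if any(kw in text for kw in kws))

print(classify("UAV spotted near the convoy route", indicators))
# ['uav_activity']
```

Trending-topic alerts (also in item 1) would then just be counts of these classifications over a time window.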

Sat, 01 Dec 2018 15:12:54 GMT
http://www.marinemagnet.com/status-updates/top-10-blockchain-rules-consider-work-order-consensus-agent-cooperation-bucket-brigade-routes

We set up an experiment involving “Digital Twin” robots learning to work together: one robot ideally handing off items to the other, which in turn carries them to a final destination.

Bucket brigade systems are a tool for building robust simulated robot control systems, sufficient to achieve adequate performance across a variety of behaviours. A parallel implementation of the bucket brigade system would speed up the training process for the robotics controller.

Bucket brigade systems provide guidance that shortens the number of cycles required to learn task rules from only a few training examples, starting with randomly generated classifiers.
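A minimal sketch of the classic bucket-brigade credit-assignment step used in such classifier systems (the strengths, rule chain, and bid fraction below are invented for illustration): each classifier that fires passes a fraction of its strength back to the classifier that fired before it, and the external reward is paid to the last classifier in the chain.

```python
# One pass of bucket-brigade credit assignment over a fired rule chain.
def bucket_brigade(strengths, chain, reward, bid_fraction=0.1):
    s = dict(strengths)
    prev = None
    for rule in chain:
        bid = bid_fraction * s[rule]
        s[rule] -= bid           # the firing rule pays its bid...
        if prev is not None:
            s[prev] += bid       # ...to the rule that set the stage
        prev = rule
    s[prev] += reward            # environment rewards the final rule
    return s

s = bucket_brigade({"r1": 100.0, "r2": 100.0}, ["r1", "r2"], reward=50.0)
print({k: round(v, 1) for k, v in sorted(s.items())})
# {'r1': 100.0, 'r2': 140.0}
```

Over repeated episodes, strength flows backwards along successful chains, so early rules that enable a rewarded hand-off are reinforced even though only the last rule sees the reward directly.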

The Remote Access Nondestructive Evaluation system is a snake-like robotic arm that fits into small spaces of an aircraft to perform inspections. Maintainers usually have to remove whatever hard-to-reach component is in the way and crawl inside the small area.

The robotic arm can manoeuvre through small-diameter access ports and serve as an agent’s eyes as it moves around inside the aircraft, saving time and eliminating the need to take the aircraft apart. The robot is currently a prototype and is ready for the programme office to request exact specifications for use on certain aircraft.

The platform is integrated into commercial service robots so the autonomous navigation stack doesn’t have to be built from scratch. This is interesting stuff that is potentially game-changing in terms of cost: they are able to command volume pricing on sensors that we can’t. And we are engineering things to fit together nicely.

Getting tools right is especially challenging for robots that will be autonomously navigating in complex scenarios such as airstrips and other high-traffic workspaces. These areas often have tight spaces and continuously changing obstacles that require complex routes. The challenge is creating tool to handle these issues with the end-user in mind.

The robot must involve minimal training for operators, no battle space setup, single-shot learning by demonstration, and productivity reporting.

Variables impacting autonomous navigation are not limited to physical obstacles crowding a robot’s work space. Feature-less work space, and even time of day, add complexity to autonomous navigation. Many of these types of hurdles are edge cases that do not present themselves until after tools have been developed and the robots are tested in a live mission space. Edge cases are the punch you don’t see coming.

For example, being able to navigate in a cluttered, dynamic workspace with a lot of troops moving around is difficult because there’s a lack of features. “There’s nothing, really, to anchor or tie into when you’re building your map.”

Success is contingent upon getting your robot, and the tools that run it, into many different workspaces early on. Functional autonomous navigation systems are not developed in a lab. It’s fine to begin development there to create a demo, or to get funding. But those stages are the limit for lab testing.

It won’t work until you’ve been in a number of scenarios because the problems that you’re going to experience in theatre-specific deployments cannot be replicated in a lab. You can’t solve or anticipate every edge case your robot will encounter in the real world. Keeping the troops involved in the installation process and giving them the tools to troubleshoot issues in real-time can improve your robot’s efficiency.

Examples include a robot mistaking light from a reflective surface for a physical object, or infrared heaters disrupting the robot’s path. Essentially, the more edge cases you can solve, the better your navigation solution.

The key to designing a robot with autonomous navigation is creating a system that has precise and accurate motion control.

“For a lot of robots, you’re just taking a robot from point A to point B, but with field applications like equipment distribution machines, they need to drive as close to an edge, as close to a wall, as close to an obstacle as possible to maximise the floor coverage provided.”

Highly accurate motion control is imperative if you want your robot to handle complex, tight spaces. That’s something you can’t do with a robot that has a much larger footprint. Designing a system that is as tight and accurate as possible gives you much better capabilities to navigate complex spaces.

Detection by actual troops is crucial for expanding end-user applications. If your robot can’t tell a set of troops from a package on the floor, you’ve hamstrung your unit manoeuvre before it starts.

“If you’re developing your own tools, or looking for navigation systems to use in your robotics project, then having a system that can recognise troops as distinct from obstacles is essential.

Unless you’re going to clear everybody out of the space the robot works in, which limits the applications, you really need to solve the workforce element. This is one of the biggest problems you need to solve.”

But being overcautious also has its problems. In initial pilot tests with the equipment distribution machine, the robots checked, paused, and analysed for the sake of safety so often that troops became less comfortable around them; people read the hesitation as a lack of intelligence. It’s critical to use sensor data from real-world scenarios and virtual space to reduce false positives.

For your product to be scalable, the installation process must be simple, not technical. Many of today’s robots require an engineer for installation into a new space, since the process is beyond the skill set of non-technical staff. This in-depth, technically complex launch can bottleneck a critical early stage; sending an engineer on-site to every new customer is not sustainable or scalable.

One way to counteract this challenge is to have your customers identify workforce members who might be capable of taking on installation as a new project. Another is to do some preventative maintenance in the design itself: you want your robot to feel familiar, like products your customers have used before, and the user interface to be lean and intuitive.

“We wanted to keep it as simple as possible… As you can see in this screenshot right here, there are only two choices for the user: choose a route or teach a route. The user can use the machine the way they always have and, while they are doing that, it creates a map of the space and records the routes; or they set it to play.”
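The two-choice interface described above amounts to record-and-replay learning by demonstration; a bare-bones sketch (the pose format and route names are assumptions for illustration):

```python
# "Teach a route": record the operator's poses under a name.
# "Choose a route": return the recorded poses for autonomous replay.
routes = {}

def teach(name, poses):
    """Store a demonstrated sequence of (x, y, heading_deg) poses."""
    routes[name] = list(poses)

def choose(name):
    """Return the recorded route for replay."""
    return routes[name]

teach("hangar_loop", [(0, 0, 0), (5, 0, 0), (5, 3, 90)])
print(len(choose("hangar_loop")))  # 3 recorded poses
```

In a real system the recorded poses would be tied to the map built during the demonstration drive, and replay would go through the motion controller rather than returning raw poses.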

“Trusting” a Robotic System to Make Quality Parts Opens the Door to Building Usable Parts When and Where You Are Working

Consider sustainment: a maintainer printing a replacement part at sea, or a mechanic printing a replacement part for a truck deep in the desert. This takes 3D printing to the next big step of deployment.

We are exploring how machine learning and artificial intelligence can make complex 3D printing more reliable and save hours of tedious post-production inspections.

In modern factories, 3D printing parts requires persistent monitoring by specialists to ensure intricate parts are produced without impurities and imperfections that can compromise the integrity of the part overall. To improve this labor intensive process, we are developing multi-axis robots that use lasers to deposit material and oversee the printing of parts.

Initial work will focus on developing computer models that can predict the microstructures and mechanical properties of 3D printed materials, generating simulation data to train with and looking at variables such as the spot size of the laser beam, the feed rate of the titanium wire, and the total energy density input into the material while it is being manufactured.

This information helps the team predict the microstructure, the organisational structure of a material at a very small scale, which influences the physical properties of the additively manufactured part.

The information will be plugged into a model that predicts the mechanical properties of the printed component. By taking temperature and spot-size measurements, the team can ensure they are accurately controlling energy density: the power of both the laser and the hot wire that goes into the process.
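One common way to express deposited energy density ties together the variables named above; this is an assumed textbook-style formula for illustration, not necessarily the project’s exact definition.

```python
# Areal energy density (J/mm^2): laser power divided by travel speed
# and beam spot size. Units and formula are a common convention, used
# here as an illustrative assumption.
def energy_density(power_w, travel_speed_mm_s, spot_size_mm):
    return power_w / (travel_speed_mm_s * spot_size_mm)

# e.g. a 2 kW laser at 10 mm/s travel with a 4 mm spot:
print(energy_density(2000.0, 10.0, 4.0))  # 50.0 J/mm^2
```

Holding this quantity in a known band, via the temperature and spot-size measurements described above, is what lets the model map process settings to expected microstructure.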

All of that happens before you actually try any kind of machine learning or neural networks with the robot itself. That’s just to train the models to the point where we have confidence in them.

One key problem could come in cleaning up the data and removing excess noise from the measurements. Thermal measurements are fairly easy and not data intensive, but optical measurements can produce an enormous amount of data that is difficult to manage.

We want to learn how to shrink the size of that dataset without sacrificing key parameters, compressing and manipulating the data to extract the key information needed to train the algorithms.
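As a toy illustration of shrinking a measurement stream without losing its trend (real pipelines would use far smarter compression; the signal values are invented), block-averaging every k samples:

```python
# Downsample by averaging fixed-size blocks: an 8-sample optical trace
# collapses to 2 values while the step in the signal survives.
def downsample(samples, k):
    return [sum(samples[i:i + k]) / len(samples[i:i + k])
            for i in range(0, len(samples), k)]

signal = [1.0, 1.2, 0.8, 1.0, 5.0, 5.2, 4.8, 5.0]
print(downsample(signal, 4))  # [1.0, 5.0]
```

The choice of block size is exactly the trade-off the text describes: bigger blocks mean smaller datasets but more risk of averaging away a key parameter.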

Robots will begin producing 3D titanium parts and learn how to reliably construct geometrically and structurally sound parts. This portion of the program will confront challenges from the additive manufacturing and AI components of the project.

On the additive manufacturing side, the team will work with a new manufacturing process, trying to understand exactly what the primary, secondary, and tertiary interactions are between all those different process parameters.

As you build the part, depending on its geometric complexity, those interactions change based on the path the robot takes to manufacture it. One of the biggest challenges will be understanding exactly which of those parameters are primary and which are tertiary, and to what level we need to manipulate or control those process parameters in order to generate the confidence in the parts that we want.

At the same time, the AI and machine learning challenges need to be tackled. As with other AI programmes, it’s crucial that the communication interface is learning the right information, the right way. The models will give the communication interface a good starting point, but this will be an iterative process that depends on the communication interface’s ability to self-correct.

At some point, inaccuracies could creep into that model. The system itself then has to recognise that it may be entering a regime that will not produce the desired mechanical properties or microstructures, and self-correct so that it instead stays in a regime that produces the geometric part you want.

With a complete communication interface that can be trusted to produce structurally sound 3D printed parts, time-consuming post-production inspections will become a thing of the past.

Instead of relying on nondestructive inspections and evaluations, if you have enough control of the process, enough in-situ measurements, and enough models to show that the process and the robot performed exactly as expected and produced a part whose capabilities you know, you can deploy that part immediately.

That’s the end game; what we’re trying to get to is building the quality into the part instead of inspecting it in afterwards.

Confidence in 3D printed parts could have dramatic consequences for service members across the services. Instead of waiting for replacement parts, they could search a database of components, find the part they need, and have a replacement they can trust in hours rather than days or weeks.

Advanced Weapons System Sustainment Team Strategies Utilise New Workflow Tools to Improve Tracking of Product Requirements

While sustainment strategies do not guarantee successful outcomes, they serve as a tool to guide operations as well as support planning and implementation of activities through the life-cycle of the aircraft. Specifically, at a high-level the strategy is aimed at integrating requirements, product support elements, funding, and risk management to provide oversight of the aircraft.

For example, these sustainment strategies can be documented in a life-cycle sustainment plan, postproduction support plan, or an in-service support plan, among other types of documented strategies.

Additionally, program officials stated aircraft sustainment strategies are an important management tool for the sustainment of the aircraft by documenting requirements that are known by all stakeholders, including good practices identified in sustaining each aircraft.

Agents are integrating time-saving applications into their own workflow with proven concepts across commands and services. These time-saving tools enable simultaneous access to data such as requirements, program notes and contractor documents.

You can move from a system where you wait for status updates and comments to come back from an engineer or contractor, to one where you log in and see what they are working on in real time, so the team can work on a project without any lag.

Acquisition rules require contractors, program managers, engineers and other stakeholders to work together, but dozens of technical documents usually move from contractor to DoD on any given day, creating a nightmare for acquisition officials charged with ensuring each document is reviewed and meets expectations.

Programs find themselves overwhelmed by the sheer volume of data. If data spends too long in the review state and misses the contractual response deadline, the program is put at risk.

Without a vector check from DoD, the contractor assumes a conditional acceptance and moves forward, assuming it is on the right track. This can lead to a host of problems, including a possible schedule slip.
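The review-state risk described above is, at bottom, a deadline calculation. A hypothetical sketch, assuming a 30-day contractual response window and a one-week warning margin (neither figure is from the source, and the document names are invented):

```python
# Flag documents whose time in the review state is approaching a
# contractual response deadline. Field names and windows are assumptions.
from datetime import date, timedelta

DEADLINE = timedelta(days=30)
WARN_AT = timedelta(days=7)   # warn when a week or less remains

def at_risk(submitted_on, today):
    """True if the remaining review window is inside the warning margin."""
    remaining = (submitted_on + DEADLINE) - today
    return remaining <= WARN_AT

docs = [("CDRL-A001", date(2018, 11, 1)), ("CDRL-A002", date(2018, 11, 20))]
today = date(2018, 11, 26)
flagged = [name for name, sub in docs if at_risk(sub, today)]
# CDRL-A001 has 5 days left and is flagged; CDRL-A002 has 24 days left
```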

The team is now spreading the proven results of tools and showing individual operational units can use the tools to suit various purposes. Versatility is inherent in the review and content-sharing applications, so new users usually recognize ways to streamline the bulk of documents that shuttle through an acquisition enterprise.

The objective is simple: information dominance through the creation, review, approval and dissemination of data. If you have unrefined, makeshift processes, this won't work for you, at least not yet. Business processes work more efficiently with automation, so there are now options if your workflow is functional but uses outdated tools or requires intense use of workforce capital.

The advantage of the utilities lies in the ability to collaborate on existing work, review past work and evolve the system architecture to meet changing needs. These tools provide value now, and value in the future, by giving agents access to program work already performed. Every agent in the “Blockchain” can see, through comment tracking, how data changed to capture the correct information and meet requirements.

This fidelity, coupled with the capacity to short-circuit future process complications, makes work-order content-sharing applications ideal for long-running, complex weapons system sustainment program offices looking to limit the volume of work spent on the acquisition schedule.
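The comment-tracking “Blockchain” mentioned above can be illustrated with a hash chain: each comment records a hash of its predecessor, so any edit to past data is detectable. This is a single-machine sketch of the idea, not a distributed ledger, and the field names are invented.

```python
# Tamper-evident comment tracking: each entry is chained to the
# previous one by a SHA-256 hash over its contents.
import hashlib, json

def add_comment(chain, agent, comment):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    entry = {"agent": agent, "comment": comment, "prev": prev_hash}
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    chain.append(entry)
    return chain

def verify(chain):
    """Recompute each link; any edited entry breaks the chain."""
    prev = "0" * 64
    for entry in chain:
        body = {k: v for k, v in entry.items() if k != "hash"}
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != recomputed:
            return False
        prev = entry["hash"]
    return True

chain = []
add_comment(chain, "engineer", "Section 3.2 tolerance updated")
add_comment(chain, "contractor", "Acknowledged, drawing revised")
```

Every agent holding a copy of the chain can run `verify` to confirm how the data changed and that no past comment was silently altered.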

The parallel bucket-brigade communication interface, carrying content for sustainment execution, operational manoeuvre and similar uses, is evaluated through the implementation of space-time parallel applications for massively parallel machines. It is shown that the simplified version of the in-time contract work-order rules is valid for time-dependent problems, and that it can be implemented in the form of bucket-brigade communication with simple computations.

The performance of several configurations must be evaluated using the interface for bucket-brigade communications. On the basis of robot/agent performance measurements for sustainment business operations, or any other practical application, effective strategies for further tuning toward large-scale computing should be discussed.

There is a way to parallelise operations dynamically on a “Blockchain” while maintaining both decentralisation and security. No one on the internet appears to have spelled out this particular design, and it might actually help with scaling.

In contrast to simply breaking the whole system into smaller parts, as you do in a bucket brigade, this design adapts much better to the current utilisation of the communication network; however, it does not improve the storage consumption on each node the way breaking into smaller parts does.

There is no reason not to implement this strategy on top of current practice to increase flexibility and weapons system sustainment work-order transaction throughput. Another factor to consider is that many of the top “Blockchain” projects still use proof of work, and changing agent consensus rules can be difficult for any project and takes time.

We show that collaboration is achieved only when robots are rewarded based on a non-discounted global reward averaged over time, and we conclude with work on agent modeling under communication.

We have used communication protocol for agents to subcontract subtasks to other agents. In this approach, each agent tries to decompose tasks into simpler subtasks and broadcasts announcements about the subtasks to all other agents in order to find “contractors” who can solve them more easily.

When agents are capable of learning about other agents’ task-solving abilities, communication is reduced from broadcasting to everyone to communicating exact messages to only those agents that have high probabilities to win the bids for those tasks.
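The announce-bid-award cycle described above can be sketched in a few lines. The agent names, the task, and the cost tables are hypothetical; a real contract-net implementation would add message passing and the learned models of other agents' abilities that reduce broadcasting.

```python
# Contract-net sketch: a manager announces a subtask, contractors bid
# with their estimated cost, and the lowest bidder wins the contract.
class Agent:
    def __init__(self, name, costs):
        self.name = name
        self.costs = costs   # task -> estimated cost for this agent

    def bid(self, task):
        """Return a bid, or None if the agent cannot solve the task."""
        return self.costs.get(task)

def award(task, contractors):
    """Broadcast the announcement and award to the cheapest bidder."""
    bids = [(a.bid(task), a) for a in contractors]
    bids = [(cost, a) for cost, a in bids if cost is not None]
    if not bids:
        return None
    cost, winner = min(bids, key=lambda b: b[0])
    return winner.name

agents = [Agent("A", {"weld": 5}), Agent("B", {"weld": 3}), Agent("C", {})]
# Agent B can solve the "weld" subtask most easily, so it wins the bid.
```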

A related approach is presented where learning is used to incrementally update models of other agents to reduce communication load by anticipating their future actions based on their previous ones.

Case-based learning has also been used to develop successful joint plans based on one's historical expectations of other agents' actions. Multi-agent learning is a new field and its open research issues are still very much in development. Here, we single out several issues we observed recurring while surveying past implementation efforts.

We believe these specific areas have proven themselves important open questions to tackle in order to make multi-agent learning more broadly successful as a technique.

These issues arise from multi-agent learning and may eventually require new learning methods specific to multiple agents, as opposed to the more conventional single-agent learning methods (case-based learning, reinforcement learning, and traditional evolutionary computation) now common in the field.

Shaping, layered learning, and fitness switching, are not multi-agent learning techniques, but they have often been applied in such a context.

Less work has been done on formal methods for decomposing tasks and behaviours into forms appropriate for multi-agent solutions, on how agents' sub-behaviours interact, and on how and when the learning of these sub-behaviours may be conducted in parallel.

Consider robot soccer as an example: while it is true that agents must learn to acquire a ball and to kick it before they can learn to pass the ball, their counterparts must also have learned to receive the ball, and to ramp up difficulty, opponents may simultaneously/co-adaptively learn to intercept the ball.

Not much attention has been paid to examine how to form these “decomposition dependency graphs”, much less have the learning system develop them automatically.

Yet to structure the learning process in parallel, simplify the search space, and produce more robust multi-agent behaviours, understanding these interactions is important. One notable exception is that in many domains the actions of some agents may be independent.

Predator-prey pursuit is one of the most common work spaces in multi-agent learning research, and it is easy to implement. Pursuit games consist of a number of predator agents cooperatively chasing a prey. Individual predator agents are usually not faster than the prey, and often agents can sense the prey only if it is close by. Therefore, the agents need to actively cooperate in order to successfully capture the prey.
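A minimal version of the pursuit domain is easy to write down, which is part of why it is so popular. The grid coordinates, the greedy movement rule, and the stationary prey below are simplifying assumptions; real experiments use a moving prey and limited sensing ranges.

```python
# Pursuit-domain sketch: predators greedily close the gap to the prey;
# capture occurs when a predator reaches the prey's cell.
def step_towards(pos, target):
    """Move one cell along each axis towards the target (king-move)."""
    x, y = pos
    tx, ty = target
    dx = (tx > x) - (tx < x)
    dy = (ty > y) - (ty < y)
    return (x + dx, y + dy)

def pursue(predators, prey, max_steps=20):
    """Return the number of steps until some predator reaches the prey."""
    for t in range(1, max_steps + 1):
        predators = [step_towards(p, prey) for p in predators]
        if prey in predators:
            return t
    return None

# Two predators converge on a stationary prey at (4, 4); the nearer
# one needs max(|4-0|, |4-0|) = 4 king-moves.
steps = pursue([(0, 0), (8, 2)], (4, 4))
```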

The goal of our leveled consensus “sustainment contracting” mechanism is to allow some flexibility, as in the case with no commitment, while guaranteeing agents some level of security, as in the case of full commitment.

Full commitment contracts can be viewed as one end of a spectrum, with commitment-free contracts at the other end. Leveled commitment contracts span this entire spectrum depending on how the decommitting penalties are chosen. Leveled commitment is desirable because it speeds up the negotiation process by increasing parallelism.

An agent can make mutually exclusive low-commitment offers to multiple agents. If more than one accepts, the agent can backtrack from all but one, so it can address the other parties in parallel instead of addressing them one at a time and blocking to wait for an answer before addressing the next.

For example, if an agent wants one particular contract, it can offer that contract to several parties with meaningful commitment, instead of with no commitment at all, which would be strategically meaningless. Load balancing is crucial for parallel applications since it reflects good use of the capacity of the parallel processing units.
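The commitment spectrum can be made concrete with a decommitment penalty that scales from zero (commitment-free) up to a level no agent would ever pay (full commitment). The linear penalty and the numbers below are illustrative assumptions, not part of any specific negotiation protocol.

```python
# Leveled-commitment sketch: the decommitment penalty interpolates
# between free contracts (level 0) and full commitment (level 1).
def decommit_penalty(contract_price, level):
    """level = 0.0 -> commitment-free, level = 1.0 -> full commitment."""
    assert 0.0 <= level <= 1.0
    return contract_price * level

def should_decommit(current_value, outside_offer, penalty):
    """Back out only if the outside offer beats the contract plus penalty."""
    return outside_offer > current_value + penalty

price = 100.0
penalty = decommit_penalty(price, 0.25)   # 25.0 for a quarter-committed deal
```

With the penalty at 25, an outside offer of 120 is not worth breaking the 100-unit contract, but an offer of 130 is.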

Here we look at applications that put high demand on the parallel interconnect in terms of throughput. Examples are compression applications, which both process large amounts of data and require a lot of computation, a combination that is demanding for most parallel architectures. The problem is exacerbated when working with heterogeneous parallel hardware, as when using a heterogeneous cluster to execute a parallel application.

Reinforcement consists of redistributing bids made between subsequently chosen rules. The bid of each winner at each time step is placed in a "bucket". A record is kept of the winners of the previous time step, and they each receive an equal share of the contents of the current bucket; fitness is thus shared amongst activated rules. If a reward is received from the work space communications, it is paid to the winning rule that produced the last output. Each rule acts much like a middleman in a bucket-brigade “Blockchain”.
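The bucket-brigade payments described above can be sketched directly. The rule strengths, the bid fraction, and the two-rule chain are illustrative assumptions; a real classifier system would also run the auction that selects the winning rules at each step.

```python
# Bucket-brigade credit assignment: each winner pays its bid into a
# bucket handed back to the previous step's winner; external reward
# goes to the rule that produced the last output.
BID_FRACTION = 0.1

def run_chain(strengths, chain, reward):
    """Propagate payments backwards along a chain of firing rules."""
    prev_winner = None
    for rule in chain:
        bid = BID_FRACTION * strengths[rule]
        strengths[rule] -= bid            # winner pays its bid into the bucket
        if prev_winner is not None:
            strengths[prev_winner] += bid # bucket goes to the predecessor
        prev_winner = rule
    strengths[prev_winner] += reward      # external payoff to the last rule
    return strengths

s = run_chain({"r1": 100.0, "r2": 100.0}, ["r1", "r2"], reward=50.0)
# r1 pays a bid of 10 and receives r2's bid of 10, ending back at 100;
# r2 pays 10 and receives the reward of 50, ending at 140.
```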

1. Work forward and continue to pick units for your job on the forward line.

2. When you exchange work with your successor, then work backward.

3. If you are the last worker when you reach the end of the forward line, transfer your job to the backward line and work backward.

4. If you catch up with your successor, who is crossing the aisle, then wait.

5. Work backward and continue to pick units for your job on the backward line.

6. When you exchange work with your predecessor, then work forward.

7. If you are the first worker to complete your job at the end of the backward line, initiate a new job and work forward.

8. If you catch up with your predecessor, who is crossing the aisle, then wait.

9. If you are on the forward line, remain idle until your successor finishes crossing the aisle, then work forward.

10. If you are on the backward line, remain idle until your predecessor finishes crossing the aisle, then work backward.

Top 10 Digital Twin Framework Objects Close Engineering/Simulation Gap, Introduce Structure Configuration

Common means of describing complex systems using object orientation represent specification narratives, but no details are given on how and when features are linked together as systems creation progresses. The main concern about using object-oriented narratives for real-time embedded systems is the speed/size characteristics of the builder to be utilised.

Some points in support of object-oriented narratives for embedded systems include the requirement that objects be efficient, so the Site Visit Executive can write about larger systems with fewer defects. Results are obtained in less time using Digital Twin simulation techniques instead of structured methods, and advances can be implemented in assembly narratives, among others.

An integrated engineering network, spanning the entire value chain, is operated to intelligently connect various service divisions and to generate a work space for products and services. The conditions for the Digital Twin are determined so that the digital space can feed into the real world, and the real world back into the digital, allowing such intelligent products to cope with rising variation.

Digital twins allow you to access large amounts of data in real time. But you don’t have to keep all that data to yourself. In fact, you’d be wise to share it. Creating a digital twin network makes it easy to share data with internal workforce, external supply chain partners, and even customers. With access to the same insight, you, your partners, and your customers can collaborate to improve products and processes.

Supply chain partners benefit from a network of Digital Twins with enhanced visibility. If an asset malfunctions, your maintenance provider knows it needs to mobilise a team to fix the equipment. If your company manufactures a product ahead of schedule, your logistics provider knows it can pick up the goods and deliver them early.

Digital Twin networks help you get invaluable insight from your customers. By monitoring how customers interact with your goods, you can remove underused features from future product iterations or develop new products that highlight popular features. Enabling an open, collaborative workspace through a network of digital twins offers you the chance to transform engineering, operations, and everything else in between.

With a Digital Twin network you share with your customer, you can monitor the condition of your asset around the clock and accurately track how much your customer consumes. This reliable and transparent method ensures you’re always standing by to repair the asset. Thinking outside the box and exploring innovative as-a-service business models is a surefire way to remain profitable in today’s ever-evolving digital world.

The term Digital Twin can be described as a digital copy of a real factory, machine, worker etc., that is created and can be independently expanded, automatically updated as well as being widely available in real time. Every real product and production site is permanently accompanied by a Digital Twin. First prototypes of Digital Twins already exist in Logistics Learning programmes built on a multidimensional data and information model.

A standardised language for the robot control systems, via agents and positioning systems, has to be integrated. The continuity of the real workshop in the digital factory, as an efficient means of keeping digital models continuously up to date, can function as the basis for change.

For localisation, sensor combinations should be used that, in addition to the hardware, already contain the application required for sensor data fusion. Processing systems, scenario live simulations and digital shop-floor management combine into a mandatory procedure. Essential to the Digital Twin is the ability to consistently provide all subsystems with the latest state of all required information and methods.

A Digital Twin is intended to be a digital replica of physical assets, processes, or systems, in other words, a model. It is most often referenced as an outcome of networks where the expanding world of devices with sensors provides an equally fast expanding body of data about those devices that can be broken down and assessed for efficiency, design, maintenance, and many other factors.

Since the data continues to flow, the Digital Twin model can be continuously updated and ‘learn’ in near real time any change that may occur. Digital twins can produce value without machine learning and AI if the system is simple. If for example there are limited variables and linear relations are easily discovered between inputs and outputs then no data science may be required.

However, the vast majority of target systems have multiple variables and multiple streams of data, and do require data science discipline to make sense of what's going on. Even though many experts tend to equate all this with AI, the great majority of the benefit of modeling can be achieved with traditional machine learning tools that discover patterns in sensor readings.

For example, video feeds of components during manufacture can already be used to detect defective items and reject them. Similarly audio inputs of large generators can carry signals of impending malfunctions like vibration even before traditional sensors can detect the problem.

At first, an asset may be operating as expected. Inside the machine, however, it's another story. A glitch in the system is causing your asset to gradually slow down. Later, it'll fail completely. Without the right technology, you'd never know that. But Digital Twins help you anticipate issues and prevent problems before they even occur. They enable you to detect anomalies and automate repair processes at the first sign of weakness. And by coming to your asset's rescue sooner rather than later, you can avoid serious service interruption or prolonged downtime.
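One hedged way to realise this early-warning idea is to compare a short rolling mean of a sensor reading against its long-run baseline and flag drift before outright failure. The window size, baseline, and tolerance below are made-up values for illustration.

```python
# Detect a gradual slowdown: flag the first point where the rolling
# mean of a sensor reading drifts more than 5% from its baseline.
from collections import deque

def detect_drift(readings, window=5, baseline=100.0, tolerance=0.05):
    """Return the index at which the rolling mean drifts past tolerance."""
    recent = deque(maxlen=window)
    for i, r in enumerate(readings):
        recent.append(r)
        if len(recent) == window:
            mean = sum(recent) / window
            if abs(mean - baseline) / baseline > tolerance:
                return i    # anomaly detected before outright failure
    return None

# RPM readings that slowly decay: the drift is caught well before zero.
rpm = [100, 100, 99, 99, 98, 97, 95, 93, 90, 86, 81]
idx = detect_drift(rpm)
```

In a deployed twin the baseline itself would be learned from history and re-estimated as sensors are replaced, which is exactly the model-drift maintenance the next paragraph warns about.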

If you already operate with advanced networks, especially those connected to industrial machines and processes, you are probably in the sweet spot for Digital Twins. But any predictive model is potentially subject to drift over time and needs to be maintained. For example, some sensors are notoriously noisy, and as you start to isolate signal from noise your sensors will undoubtedly need to be updated.

Although the definition of Digital Twins often includes specific reference to ‘processes’, examples of processes modeled with Digital Twins other than mechanical factory processes are difficult to find. Since many organisations don't have complex or capital-intensive machinery and industrial processes, what is the role of Digital Twins in ordinary business processes?

There are applications available today that can automatically detect the beginning and end points of each step in the transaction from network logs thus providing the same sort of data stream for service origination as sensors might for aircraft.

The more human activity is included in the data being modeled, the less accurate the model will be. Those who have modeled machine-based or factory-process data, where very little human intervention occurs, can regularly achieve high accuracy.

But if we are modeling a business process such as customer-views-to-order in service centers, the complexity of operator action means the best models may be of limited accuracy.

But even with industrial applications the error rate still exists. Models have error rates. For example, when you use Digital Twin models to predict preventive maintenance or equipment failure, in some percentage of cases the maintenance is performed too early, and in some cases the model fails to foresee an unexpected failure. The model can be continually improved as new data and techniques become available, but it will always be a model, not a one-to-one identity with reality.

Digital Twins can be used as a representation of current reality and new machines, processes, or components are designed and built up from scratch using those assumptions about operating reality. So you have to be sure to understand how the error rate in the underlying model might mislead designers into serious errors about how the newly designed machine or process might perform in the current reality.

The great majority of your interaction with digital systems is still request driven: once a condition is observed, you instruct or request the system to take action. This is rapidly being supplanted by event-driven processing. The modeling of machines, systems, and processes is a precondition for the optimisation work that determines when specifications and decisions are needed. As the Digital Twin movement expands, more streaming applications will be enabled with automated event-driven decision making.

Object-Oriented Programming Simplifies Digital Twins

The digital twin model offers a breakthrough approach to structuring state stream-processing applications. This model organises key network information about each data source into application components that track the data source's changing state, interpret that state, and generate real-time feedback.

Using digital twins offers three key benefits over more traditional, pipelined stream-processing techniques: automatic event correlation by data source, deeper dives with enhanced state information, and parallel assessments to discover aggregate trends for all data sources in real time. It represents a big step forward for building stream-processing applications.

When using the digital twin model, each data source in a physical system has a corresponding object in the stream-processing platform that encapsulates both state information and code. State information includes a time-ordered list of the device’s incoming event messages along with key state information about the dynamic state of the data source. This information could include parameters, service history, known issues, and much more.

Application code handles the management of event list and the real-time analysis of incoming events for performing device commands. This code benefits from the rich context provided by dynamic state information, enabling deeper introspection than analyzing the event stream alone.

The secret to keeping event-assessment latency low when handling events from many data sources is to host these Digital Twin objects in an in-memory data grid with an integrated compute engine, minimising network bottlenecks by assessing events within the grid.

Object-oriented storage precisely fits the requirements for Digital Twin objects, making it straightforward to deploy and host these objects with both scalable performance and high availability. The grid transparently distributes the Digital Twin objects across a cluster of networks for scalable processing.

Let’s take a look at how object-oriented techniques can simplify the design of digital twins. Because a digital twin encapsulates state information and associated code, it can be represented as a user defined type/class within an object-oriented language.

The use of an object class to represent the controller conveniently encapsulates the data and code as a single unit and allows the creation of many instances of this type to manage different devices. For example, consider the Digital Twin for a basic controller, with class properties (a status and an event collection) describing the controller state, and class methods for assessing events and performing device commands.

You can also make use of the class definition to construct various special-purpose digital twins as subclasses, taking advantage of the object-oriented technique called inheritance. For example, we can define the Digital Twin for a hot water valve as a subclass of a basic controller that adds new properties, such as temperature and flow rate, with associated methods for managing them.

This subclass inherits all of the properties of a basic controller while adding new capabilities to manage specialised controller types. Using this object-oriented approach maximises code reuse and saves development time.
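The controller/valve design described above maps naturally onto classes. This is a sketch under the article's own naming (basic controller, hot water valve); the event fields and method bodies are assumptions, not any real device-twin API.

```python
# A basic controller twin holds status and a time-ordered event list;
# a hot water valve twin adds temperature and flow rate via inheritance.
class BasicController:
    def __init__(self, device_id):
        self.device_id = device_id
        self.status = "ok"
        self.events = []            # time-ordered incoming event messages

    def handle_event(self, event):
        self.events.append(event)
        if event.get("fault"):
            self.status = "fault"

class HotWaterValve(BasicController):
    """Subclass adds valve-specific state while reusing event handling."""
    def __init__(self, device_id):
        super().__init__(device_id)
        self.temperature_c = 0.0
        self.flow_rate_lpm = 0.0

    def handle_event(self, event):
        super().handle_event(event)
        self.temperature_c = event.get("temp_c", self.temperature_c)
        self.flow_rate_lpm = event.get("flow_lpm", self.flow_rate_lpm)

valve = HotWaterValve("valve-7")
valve.handle_event({"temp_c": 62.5, "flow_lpm": 11.0})
```

The subclass overrides `handle_event` but delegates to the parent first, so fault handling is written once and reused by every specialised controller type.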

Leveraging object-oriented techniques, you can build a group of Digital Twins that represent successively higher levels of control for complex systems. Consider the following set of interconnected Digital Twin instances used in managing a pump room:

In this example, the pump room has Digital Twin partners connected directly to devices, one for a hot water valve and another for a circuit breaker. These twins are both implemented as subclasses of a basic controller and add properties and methods specific to their devices. They feed telemetry to a higher-level Digital Twin instance which manages overall operations for the pump room.

This Digital Twin can also be implemented as a subclass of a basic controller, even though it is not connected directly to a device. What is important to observe about this example is how object inheritance and group rank play separate roles in defining the Digital Twin objects that work together to assess event streams. Digital Twin models can be subclassed to customise actions, and interconnected Digital Twins can be built into systems that process events at successively higher levels of virtual expression.
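The pump-room hierarchy can be sketched as a parent twin that aggregates the status of child twins without being wired to a device itself. The "worst child status wins" rule is an assumed aggregation policy chosen purely for illustration.

```python
# A higher-level twin manages overall operations by aggregating the
# status of the device-connected twins that feed it telemetry.
class Twin:
    def __init__(self, name, children=None):
        self.name = name
        self.status = "ok"
        self.children = children or []

    def overall_status(self):
        """A parent is only as healthy as its unhealthiest descendant."""
        statuses = [self.status] + [c.overall_status() for c in self.children]
        return "fault" if "fault" in statuses else "ok"

valve = Twin("hot water valve")
breaker = Twin("circuit breaker")
pump_room = Twin("pump room", children=[valve, breaker])

breaker.status = "fault"   # telemetry marks one device as faulty
```

Here group rank (the parent/child links) is what rolls events upward, independently of the inheritance relationships between twin classes.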

Digital Twin models for state stream-processing developed from concepts largely unrelated to object-oriented programming, in particular product life-cycle management and industrial network device twins. Object-oriented techniques give developers powerful tools for applying Digital Twins to state stream-processing and streaming processes.

Understand Digital Twins object models and spatial intelligence graph

Digital Twins services powers comprehensive virtual representations of physical work space and associated devices, sensors, and work force. It improves development by organising domain-specific concepts into useful models. The models are then situated within a spatial intelligence graph to model the relationships and interactions between workforce spaces and devices.

Spatial graphs are virtual representations of the many relationships between spaces and devices relevant to network solutions, bringing together spaces, devices, sensors, and users. Each is linked together in a way that models the real world. For example, in workstations with many different areas, users are associated with their workstations and given access to portions of the graph.

Spatial intelligence graph

The spatial graph is a hierarchical graph of the spaces, devices, and workforce defined in the Digital Twins object model. The spatial graph supports inheritance, filtering, traversing, scalability, and extensibility, so you can manage and interact with it.

When you deploy a Digital Twins service in your subscription, you become administrator of the root node and are automatically granted full access to the entire structure. You can then provision spaces and sensors in the graph. Open-source tools are also available to provision the graph in bulk.

Graph inheritance applies to the permissions and properties that descend from a parent node to all nodes beneath it. For example, when a role is assigned to a user on a given node, the user has that role's permissions on the given node and every node below it. Each property key and extended type defined for a given node is inherited by all the nodes beneath that node.

Graph filtering is used to narrow down request results. You can filter by identifiers, name, types, subtypes, parent space, and associated spaces. You can also filter by sensor data types, property keys, and values.

Graph traversing means you can move to new locations in the spatial graph through its depth and breadth. For depth, traverse the graph top-down or bottom-up using the traversal parameters. For breadth, you can traverse the graph to get the sibling nodes directly attached to a parent space or to one of its descendants. When you query an object, you can also get all related objects that have relationships to it.
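The inheritance and traversal behaviour described above can be modelled with a small tree of nodes. This is a conceptual sketch, not the real Digital Twins API; the node names and the role table are invented for the example.

```python
# A minimal spatial graph: parent/child links, top-down traversal, and
# role assignments inherited from a node by everything beneath it.
class Node:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.children = []
        self.roles = {}            # user -> role assigned at this node
        if parent:
            parent.children.append(self)

    def descendants(self):
        """Depth-first, top-down traversal of the subtree."""
        for child in self.children:
            yield child
            yield from child.descendants()

    def effective_role(self, user):
        """Roles are inherited from the nearest ancestor that assigns one."""
        node = self
        while node:
            if user in node.roles:
                return node.roles[user]
            node = node.parent
        return None

root = Node("site")
floor = Node("floor-1", parent=root)
room = Node("room-101", parent=floor)
root.roles["alice"] = "admin"   # assigned once at the root, inherited below
```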

Digital Twins guarantees graph scalability, so it can handle your real-world workloads. Digital Twins can be used to represent large portfolios of infrastructure, devices, sensors, telemetry, and more.

Finally, Digital Twins can be customised by using graph extensibility to extend the underlying Digital Twins object models with new types and groups. Your Digital Twins data can also be enriched with extensible properties and values. Digital Twins models support the following object categories:

1. Spaces are virtual or physical locations

2. Devices are virtual or physical pieces of equipment

3. Sensors are objects that detect events

4. Resources are attached to a space and represent resources to be used by objects in the spatial graph.

5. Property keys/values are custom characteristics of spaces, devices, and sensors, used along with built-in characteristics.

6. Roles are sets of permissions assigned to users and devices in the spatial graph

7. Role assignments are the association between a role and an object in the spatial graph. For example, a user or a service principal can be granted permission to manage a space in the spatial graph.

8. User-defined functions allow customised sensor processing within the spatial graph to: Set a sensor value, Perform custom logic based on sensor readings, Set the output to a space, Send notifications when predefined conditions are met.

9. Matchers are objects that determine which user-defined functions are executed for a given telemetry message.

10. Endpoints are the locations where Digital Twins events can be routed, for example, Event Hub, Service Bus, and Event Grid.

Top 10 Digital Twin Execution System Enables Self-Organised Production with Real-Time Learning Control

Collaboration between Digital Twins becomes efficient when an asset prioritises and uses data originating from its partners, because the machine-learning-based decision-making tools can then train themselves over a larger data set, increasing their accuracy.

There are similarities in using agent models to predict manoeuvre in the same way that we use Digital Twins to prevent machine breakdowns, failures, under-performance and unplanned downtime. Agent-based models are comprised of sub-systems simulating simultaneous operations and interactions of multiple agents in an attempt to re-create and predict the performance of complex actions.

Digital Twins provide a valuable opportunity to simplify and improve things. It is not just a question of gathering more data, but rather of turning that data into useful insights. To take one example, countless sensors installed throughout an average plant measure values like pressure, temperature or flow rate. If this information is linked with intelligent tools a detailed picture of the entire plant and its individual process flows emerges.

The agent-based model approach builds from lower-level (micro) sub-systems connected to create a more complex (macro) entity. Combinations of simple behavioural rules can be used to predict the behaviour of complex systems.

In the future, condition-based monitoring will allow agents to identify incidents before they occur. Intelligent forecasting will also ensure that spare parts can be ordered in good time. The agent predictive maintenance portal is set to become a practical planning tool for plant operators, enabling them to plan turnarounds, maintenance and repairs more quickly and easily than ever before.

Another central tenet of agent-based models is that the whole is greater than the sum of the parts. Individual agents are typically characterised as boundedly rational, presumed to be acting in what they perceive as their own interests, and using simple decision-making rules that allow learning and adaptation.

Given the current state of an asset, the Digital Twin model uses predictive learning technology to proactively identify potential asset failures before they occur. Using artificial intelligence with advanced process control, control strategy design and process optimisation, the necessary variations from process and asset design are fed back to the engineering stage of the lifecycle enabling a complete and efficient digital value loop.

To enable the Digital Twin architecture, a spatial graph comprising the distances/similarities between assets is formed and stored in the multi-agent platform. As the system operates, inter-asset similarities are recalculated at regular intervals, subsequently updating the partner zone.

Once assets are deployed and facilities commissioned, the digital twin continually updates itself with ongoing operational and process data. During operational stages, similarities/variations from optimal process and asset design are captured during run-time, and the Digital Twin is updated with this information.

Similarity may be calculated from a variety of indicators such as feature data, machine type and field data. Because the multi-agent platform is the common channel for this data, it is best placed to calculate similarity metrics, typically through enterprise-level clustering tools.
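A minimal sketch of that similarity step follows. The feature vectors, asset names and the choice of cosine similarity are illustrative assumptions; a real platform would combine feature, machine-type and field data:

```python
# Sketch: build edges of a Digital Twin "spatial graph" from pairwise
# similarity between asset feature vectors (assumed data, assumed metric).

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb)

def similarity_graph(assets, threshold=0.95):
    """Edges between assets whose feature vectors are nearly parallel."""
    names = list(assets)
    edges = set()
    for i, u in enumerate(names):
        for v in names[i + 1:]:
            if cosine_similarity(assets[u], assets[v]) >= threshold:
                edges.add((u, v))
    return edges

assets = {
    "pump_a": [1.0, 2.0, 3.0],
    "pump_b": [2.0, 4.0, 6.1],   # nearly parallel to pump_a
    "valve_c": [5.0, 0.1, 0.0],  # dissimilar
}
partners = similarity_graph(assets)
```

Re-running `similarity_graph` at regular intervals is what keeps the partner zone current as operating data changes.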

Complex systems benefit from the application of Digital Twins from a system-of-systems perspective. Having multiple instances of a single product, each with their Digital Twin that communicates with all the other digital twins, means that products can begin to learn from each other.

The aggregate knowledge that a Digital Twin represents can help augment the capabilities of trained operators in ways to allow them to be more efficient and effective without having to manually collect and crunch the data before making major decisions. Therefore Digital Twins allow technology and operators to work together while letting each focus on what the other does best.

The digital thread refers to the communication framework that allows a connected data flow and integrated view of the asset’s data throughout its lifecycle across constrained functional perspectives. The digital thread concept raises the bar for delivering “the right information to the right place at the right time.”

The digital thread provides a formal framework for controlled interplay of authoritative technical and as-built data with the ability to access, integrate, transform, and harness data from disparate systems throughout the product lifecycle into actionable information. Together, the digital thread and Digital Twin include as-designed requirements, validation and calibration records, as-built data, as-flown data, and as-maintained data.

The manufacturing system proposed here represents a real mission space: a line that produces anchoring plates for electric motor brake discs. Final products are produced on three machines, which are not fully automated; they require manual work to move parts from one machine to another.

This production cycle can be described as intermittent: there is no direct communication between machines, so there is a continuous need for operator support.

The first two machines, which perform milling and grinding work, do not require continuous communication because there is no sequential dependency between them, so two different products can be processed at the same time.

By contrast, machines 2 and 3 need to be linked, because they have to process every product. Products can therefore be worked independently elsewhere, but the work performed by the second and third machines is mandatory for all products.

What’s more, the required manual work between machines 2 and 3 obliges operators to control two machines at the same time. This becomes particularly relevant for products that need manual loading in grinding and the constant presence of the operator near the machine, to resolve the possibility that machines sit inactive until an operator moves parts between them.

Operators are providing work that could be carried out automatically by the machines themselves; to improve this situation, an automatic transport system has been introduced to connect the machines without operator intervention.
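The effect of replacing operator transfers with automatic transport can be sketched with a deliberately simple makespan calculation. All processing and transfer times below are invented, and the model ignores overlap between machines (it is a sketch, not a scheduler):

```python
# Toy model of the three-machine line above. Times are assumptions.

def makespan(n_parts, proc=(4, 3, 5), transfer=2):
    """Serial makespan: each part visits machines 1..3 in order, with a
    fixed transfer delay between machines. No pipelining is modelled."""
    per_part = sum(proc) + transfer * (len(proc) - 1)
    return n_parts * per_part

with_operator = makespan(10, transfer=6)   # operator walks parts over
with_conveyor = makespan(10, transfer=1)   # automatic transport system
saving = with_operator - with_conveyor     # idle time recovered
```

Even this crude model shows where the saving comes from: every transfer an operator no longer performs is time the downstream machine is not idle.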

The reliability model addresses issues of accurate sensors, parallel actions, action conflicts and efficient distribution of the resulting shared state of the simulation. The core of the concurrent logistics processes is assessed, including the rollback problem, virtual time local to each agent, load balancing and the implementation of interest administration.

Distributed problem solving is the name applied to a subfield of distributed AI in which the emphasis is on getting agents to work together well to solve problems that require collective effort.

Due to an inherent distribution of resources such as knowledge, capability, information and expertise among the agents, an agent in a distributed problem-solving system is unable to accomplish its tasks alone, or at least can accomplish them better (i.e., more quickly, completely, precisely or certainly) when working with others.

Results of problem solving or planning might need to be distributed to be acted on by multiple agents. For example, in a task involving the delivery of objects between locations, distributed delivery agents can act in parallel. The formation of the plans that they execute could itself involve distributed problem-solving among them.

Moreover, during the execution of their plans, features of the environment that were not known at planning time, or that unexpectedly change, can trigger changes in what the agents should do.

External operational conditions can be tracked by onboard sensors, so this type of information on operational factors is invaluable to agents: it provides operational context that would otherwise not be available.

For example, if there are two products that are otherwise used and maintained in similar fashion but one keeps failing regularly, it might be of interest to agents that the consistently failing product is being used, for example, on aircraft at very high elevations.

All such decisions could be routed through a central coordinator, but for a variety of reasons (e.g., exploiting parallelism, sporadic coordinator availability, slow communication channels) it could be preferable for the agents to modify their plans unilaterally or with limited communication among them.

The potential advantages include technology that can be exploited in a wide range of application areas. So far, only in process control and distributed databases have some of the promises of distributed processing been realised.

Applications in these areas are characterised by task decompositions in which the data can be partitioned so that each subtask is performed completely by a single node, without any need to see the intermediate processing states at other nodes.

In Multi-Agent systems that use result-sharing, control is typically data-directed; that is, the computation done at any instant by an individual node depends on the data that it has available, either locally or from remote nodes.

An explicit hierarchy of task–subtask relationships does not exist between individual nodes. A simple example of result-sharing is the development of consistent labels for a line drawing showing the edges of a collection of simple objects (e.g., cubes, wedges and pyramids) in a scene.

Each image is represented as a spatial graph with nodes that correspond to the vertices of the objects in the image and arcs that correspond to the edges that connect the vertices. The goal is to establish a correspondence between nodes and arcs in the graph and actual objects.
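A minimal result-sharing sketch of that labelling process: each node starts with candidate labels, and incompatible candidates are repeatedly pruned until the remaining labels are mutually consistent (arc-consistency style). The labels and the compatibility relation below are invented for illustration:

```python
# Constraint-propagation sketch for consistent labelling. Domains and
# compatibility pairs are illustrative assumptions, not a real vision API.

def prune(domains, edges, compatible):
    """Discard any candidate label with no compatible partner on an edge."""
    changed = True
    while changed:
        changed = False
        for u, v in edges:
            for a in list(domains[u]):
                if not any((a, b) in compatible for b in domains[v]):
                    domains[u].discard(a)
                    changed = True
            for b in list(domains[v]):
                if not any((a, b) in compatible for a in domains[u]):
                    domains[v].discard(b)
                    changed = True
    return domains

domains = {"e1": {"convex", "concave"}, "e2": {"convex", "concave"}}
edges = [("e1", "e2")]
compatible = {("convex", "convex")}  # only like labels may meet (assumed)
result = prune(domains, edges, compatible)
```

Each pruning step is a shared result: what one node rules out narrows what its neighbours can be, which is the result-sharing dynamic described above.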

The ability of agents to deal with complex, changing structures means that computers can now be applied to direct systems, such as networks of trading partners, that formerly required extensive manual attention. The increased complexity agents can direct also extends the scope of operational problems to which the agent approach can be applied.

Both performance data and external factors can be communicated in real time back to agents to improve the Digital Twin model and its simulation factors. The Digital Twin can then crunch the operational data and predict failures when it sees data points outside of prescribed tolerances.

For example, a circuit board might be seeing higher-than-expected operating temperatures, or motors might be experiencing an unusually high number of stop-start cycles. The Digital Twin could determine, with some level of confidence, that the part will fail shortly and direct agents to take a series of approved actions, such as placing an order for a replacement.
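The tolerance check just described can be sketched in a few lines. The tolerance band, the temperature readings and the decision threshold are all invented example values:

```python
# Sketch: flag a part when recent readings drift outside tolerances,
# with a crude confidence score. All numbers are assumptions.

def failure_confidence(readings, low, high):
    """Fraction of recent readings outside the [low, high] tolerance band."""
    out = sum(1 for r in readings if r < low or r > high)
    return out / len(readings)

board_temps = [71, 74, 86, 91, 88, 73, 95, 90]   # deg C, invented data
conf = failure_confidence(board_temps, low=0, high=85)
order_part = conf >= 0.5   # approved action: place an order
```

A production twin would use a learned model rather than a raw out-of-band fraction, but the shape is the same: sensor data in, a confidence level out, and an approved action gated on it.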

The information each agent requires for the unique-identifier method is more local than what is needed for uncoupled backtracking. In coupled backtracking, agents must act in sequential order, and that order cannot be obtained simply by giving each agent a unique identifier.

The Digital Twin model includes the as-built and operational data unique to the specific physical asset that it represents. For an aircraft, for example, the Digital Twin would be tied to the physical product’s unit identifier, which is referred to as the tail number.

Each agent must know the previous and next agent, so all of the other agents must be polled to find the closest identifiers above and below it. Conversely, in the unique-identifier method for uncoupled backtracking, each agent has to know only the identifiers of the agents it must establish a constraint with in order to direct that constraint.
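The "poll everyone to find your neighbours" step for the coupled, sequential case can be sketched directly. The agent identifiers below are arbitrary example values:

```python
# Sketch: in a sequential (coupled) ordering, an agent must inspect every
# other agent's unique identifier to find its predecessor and successor.

def neighbours(my_id, all_ids):
    """Closest identifiers below and above my_id (None at the ends)."""
    below = [i for i in all_ids if i < my_id]
    above = [i for i in all_ids if i > my_id]
    return (max(below) if below else None,
            min(above) if above else None)

ids = [17, 4, 42, 9]                   # invented agent identifiers
prev_id, next_id = neighbours(9, ids)  # requires polling all other agents
```

The contrast with the uncoupled case is in how much of `ids` an agent must see: here the full list, there only the identifiers of its constraint partners.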

Not only do Digital Twins improve future innovation and product development efforts; they also build a stronger relationship between agent teams. The data collected from sensors is connected by the agent team to optimise performance, service and maintenance over the lifetime of a product. The Digital Twin can help organisations avoid costly downtime, repairs and replacements, and stay ahead of other performance issues.

Some features in the field of Machine Learning are well suited for characterising both centralised and decentralised learning approaches. Others are particularly, or even exclusively, useful for characterising decentralised learning, where the degree-of-distribution feature concerns how the learning process is distributed and parallelised.

One extreme is a single agent carrying out all learning activities sequentially. The other extreme is learning activities distributed over, and executed in parallel by, all agents in a multi-agent system.

Interaction-specific features for classifying the interactions required by a decentralised learning process include:

- Level of interaction, ranging from pure observation through simple signal passing and information exchange to complex dialogues and negotiations;
- Persistence of interaction, ranging from short-term to long-term;
- Frequency of interaction, ranging from low to high;
- Pattern of interaction, ranging from completely unstructured to strictly hierarchical;
- Variability of interaction, ranging from fixed to changeable.

It is now possible for Digital Twins to exist: platforms bridge the gap between the digital and physical worlds. How does it work? Smart connected products and smart connected operations interact with an agent-based system that receives and processes all the data monitored by sensors. Using that captured data, the simulation model, or Digital Twin, is continuously updated and gives agents the insight they need to improve future product development efforts.

There may be situations when learning requires only minimal interaction (e.g., observation of another agent for a short time interval), while other learning situations require maximum interaction (e.g., iterated negotiation over a long time period).

Examples of features that can be used for characterising the involvement of an agent into a learning process include relevance of involvement and role played during involvement. With respect to relevance, two extremes can be distinguished: the involvement of an agent is not a condition for goal attainment because its learning activities could be executed by another available agent as well; and to the contrary, the learning goal could not be achieved without the involvement of exactly this agent.

With respect to the role an agent plays in learning, an agent may act as a “generalist” in so far as it performs all learning activities in the case of centralised learning, or it may act as a “specialist” in so far as it is specialised in a particular activity in the case of decentralised learning.

Goal-specific features characterising learning in multi-agent systems with respect to the learning goals are the type of improvement achieved by learning and the compatibility of the learning goals pursued by the agents.

The first feature leads to the important distinction between learning that aims at an improvement with respect to a single agent (e.g., its motor skills or inference abilities) and learning that aims at an improvement with respect to several agents acting as a group (e.g., their communication and negotiation abilities, or their degree of coordination and coherence). The second feature leads to the important distinction between conflicting and complementary learning goals.

Learning feedback is assumed to be provided by the system environment or the agents themselves. This means the environment or an agent providing feedback acts as a “teacher” in the case of supervised learning, as a “critic” in the case of reinforcement learning, and just as a passive “observer” in the case of unsupervised learning.

Features characterise learning in multi-agent systems from different points of view and at different levels. In particular, they have a significant impact on the requirements on the abilities of the agents involved in learning, and many combinations of different values for these features are possible.

Case studies provide concrete learning scenarios (e.g., examples known from everyday life), their characterising features, and how easy or difficult they would be to implement. The following learning methods or strategies used by an agent are usually distinguished:

1. Rote learning, i.e., direct implantation of knowledge and skills without requiring further inference or transformation from the learner

2. Learning from instruction and by advice-taking, i.e., operational transformation into an internal representation and integration with prior knowledge and skills

3. Learning of new information like an instruction or advice that is not directly executable by the learner

4. Learning from examples and by practice, i.e., extraction and refinement of knowledge and skills (like a general concept or a standardised pattern of motion) from positive and negative examples or from practical experience

5. Learning by analogy, i.e., solution-preserving transformation of knowledge and skills from a solved problem to a similar but unsolved one

6. Learning by discovery, i.e., gathering new knowledge and skills by making observations, conducting experiments, and generating and testing predictions on the basis of the observational and experimental results

7. Supervised learning, i.e., the feedback specifies the desired activity of the learner, and the objective of learning is to match this desired action as closely as possible

8. Reinforcement learning, i.e., the feedback only specifies the utility of the learner's actual activity, and the objective is to maximise this utility

9. Unsupervised learning, i.e., no explicit feedback is provided, and the objective is to find useful and desired activities on the basis of trial-and-error and self-organisation processes
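The reinforcement-learning case in the list above can be made concrete with a minimal tabular value update: the critic supplies only a utility signal, and the learner nudges its estimate of the chosen action toward that signal. The two-action setting, reward values and learning rate are illustrative assumptions:

```python
# Minimal reinforcement-learning sketch: a critic gives only a utility
# (reward); the learner updates its value estimate toward it.

def q_update(q, action, reward, alpha=0.5):
    """One tabular value update: move the estimate toward the reward."""
    q[action] = q[action] + alpha * (reward - q[action])
    return q

q = {"left": 0.0, "right": 0.0}
for _ in range(10):            # the critic repeatedly rewards "right"
    q = q_update(q, "right", reward=1.0)
best = max(q, key=q.get)       # the learner now prefers "right"
```

Contrast with supervised learning, where the teacher would name the desired action outright, and with unsupervised learning, where no signal arrives at all.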

Sun, 11 Nov 2018 15:24:19 GMT
http://www.marinemagnet.com/status-updates/top-10-digital-twin-interoperability-training-tasks-simulation-deploy-expeditionary-operations

As we built the “Digital Twin” Marine Aviation Plan, Headquarters Marine Corps Aviation ran into a fundamental question: what is the next-generation MAGTF, and what “Digital Twin” capabilities are we pursuing to contribute to it? What do we in aviation bring to the fight? We discuss this next generation of capability daily, but we must define what it means: not only the biggest weapons systems but also a larger, systemic change to the way our air-ground team conducts business.

With all the talk about a next-generation “Digital Twin” aviation combat element or “next generation” MAGTF, there is no official Service document that defines either of these terms, let alone how they should be realised. Like many other new concepts, the development of next-generation concepts is too general and needs refinement to provide a vision with tangible, executable initiatives that will deliver true capability to the warfighter.

The goal of MAGTF digital interoperability is to provide the required information to the right participants at the right time in order to ensure mission success, i.e., defeating the threat, while improving efficiency and effectiveness.

The Marine Corps executes mission “Digital Twins” primarily as an integrated MAGTF, organised to support the war fighter. The integration of the MAGTF and the successful execution of mission threads rely on the effective exchange of critical information; communication, whether in the form of electronic data or voice, is critical to the exchange of mission-essential information. An effective network infrastructure is required in order to achieve effective end-to-end communication.

This approach provides the additional advantage of responsible spectrum use, which becomes increasingly important as spectrum demands increase, as technology advances, and as our MAGTFs continually operate in more distributed and disaggregated operations.

In order to be digitally interoperable, each platform must be enabled from end to end in terms of the equipment required to be digitally capable. At a minimum, a platform must possess and integrate the following four things to be digitally interoperable:

Sensors take information from the environment and turn it into digital data; examples include aircraft survivability equipment, targeting pods, and a Marine’s situational awareness.

Computer processors take the digital data from the sensors and translate and format it for display or transport; examples include overhead in existing platform mission computers, additional processor cards in related or unrelated systems, and standalone processors.

An interface allows the system user to interact with the translated and formatted data from the processor; examples include integrated Multi-Function Display, a handheld electronic tablet, and a laptop computer.

Radios and associated antennas transmit and receive the translated and formatted data. Each of these components is required to fulfill the information exchange requirements in a constant integrated loop.
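The four-component loop above (sensor, processor, interface, radio) amounts to one data path. The sketch below is a stand-in only: the function names and the altitude message format are invented, not a real avionics API:

```python
# Illustrative data path through the four interoperability components.
# All names and formats are assumptions for illustration.

def sense(environment):
    return {"altitude_ft": environment["altitude_ft"]}        # sensor

def process(raw):
    return f"ALT={raw['altitude_ft']}"                        # processor

def display(msg):
    return f"[MFD] {msg}"                                     # interface

def transmit(msg):
    return msg.encode("ascii")                                # radio

environment = {"altitude_ft": 12000}
frame = transmit(process(sense(environment)))   # off-board exchange
shown = display(process(sense(environment)))    # on-board display
```

The point of the sketch is the requirement stated above: remove any one stage and the platform can no longer close the information-exchange loop.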

The Marine Air Ground Task Force Training Command Battle Simulation Center is providing deploying Marines with a variety of cutting-edge virtual training tools to help them prepare for today’s combat scenarios.

The Marine Corps uses a mix of kinetic and virtual training to enhance readiness; the Battle Simulation Center is one of several virtual training facilities aboard the Combat Center.

The Battle Simulation Center supports the Corps by providing units with various training simulations that assist in individual, small unit and staff level operations. The technology available helps Marines feel a sense of realism in their environment and provides communication with artillery units, aircraft and other Marines.

Marines must apply a robust systems engineering approach that balances total system performance and total ownership costs within a family-of-systems, systems-of-systems context. The plan must describe the overall technical approach of industry, including processes, resources, metrics, and applicable performance incentives.

Marines must also detail the timing, conduct, and success criteria of technical reviews. Systems engineering can be defined as an iterative process of top-down integration, development, and operation of a real-world system that satisfies the full range of system requirements.

Systems Simulations must provide conditions for Marines to work together on a set of inputs to achieve the desired output, where the output is a system or capability that meets user needs and requirements in a near-optimal manner. Systems Simulations must account for the entire range of system/capability acquisition, including development, construction, deployment/fielding, operation, support, training, and verification.

Systems Simulations ensure that the correct technical tasks are accomplished during the training process through planning, tracking, and coordination of activities. Lead systems Marines must simulate the skills required to develop the capability to master the various systems of modern combat.

The Battle Simulation Center trains Marines from units throughout the battle structures of the Service and will continue to provide Marines the training they need in preparation for their field exercises and, ultimately, their deployments.

In constructive training the Marines can see what is supposed to be done in certain situations. Once the Marines understand what to do they move onto virtual training, where they can put their knowledge into action.

The simulations allow the Marines to receive live feedback from their instructors; this allows the Marines to make mistakes and be corrected without risk of injury or loss of resources. After the Marines have had a chance to practice and be coached in a safe environment, they can move on to live training.

Outside of expensive training time there are few opportunities to train on what is essentially high-stress multitasking. While a game engine is no substitute for getting in a combat vehicle and putting it and its crew through their paces, the stress of a “Digital Twin” game engine can be a powerful addition to modern training toolkits.

“Digital Twin Simulation” allows two teams to take the role of various bridge crewmembers on a starship. The players are assigned to one or more roles, operating the various systems of their ship.

Many skill sets must be in the training toolbox: “Engineering” provides power to the other bridge positions. “Helm” maneuvers the ship. “Weapons” prepares and fires torpedoes at the enemy. “Sensors,” “Shields,” and “Tractor Beam” have duties as well.

Tactical Boot Camp Design curricula include training in simulation application design, where one player acts as captain, charged with making sense of the great mess that develops against another team of players on a similar enemy ship.

A “Digital Twin” virtual-reality representation of a physical asset (anything from a single control valve to a machine or a production line) makes predictive design feedback possible. The goal of the combat engine is to manoeuvre a model of a spaceship on the playing board, collecting essential supply items, avoiding collisions with astronomical bodies, and destroying the enemy.

Players roll customised dice for each duty station to perform their functions—if their station has power. For example, the helm station has dice with symbols indicating various combinations of forward movement for one or two spaces, coming about, and turns to port or starboard. While powered, the helmsman may roll the helm dice and set aside those manoeuvers that fulfill the captain’s orders at each decision point. The other stations also have custom dice tailor-made for their particular functions.

The captain keeps schedules moving by directing the movement of energy from engineering to all of the other divisions. All the while, the enemy team is doing the same thing. Commands are issued and countermanded. The departments can indicate they need more power.

Everyone is rolling dice during simulations like at a craps table, looking for the right combination of symbols that will load a torpedo tube or raise a shield or move the ship to just the right spot to fire on the enemy. Meanwhile, the teams steal glances across the table to see what the enemy is doing. It is stressful, barely controlled chaos.

Establishing strategic communications between agents within the “Digital Twin” construct must be used to direct power-requirement trade-offs in the design characteristics of ship components in the simulation, under both fluid and constant operating conditions.

Except when combat begins or the tractor beam is activated, both teams continuously roll dice, ready systems, and manoeuvre. Being able to think and make decisions on the fly about immediate needs while looking forward to the next requirement, and the one after that, is definitely a valuable skill to develop before it is needed in the real world.

“We break up our training into live, virtual, and constructive training. Live training consists of real people using real systems, virtual training is live people using virtual systems, and constructive training is virtual people using virtual systems.”

“The different assets the Marines train with in the simulation can range from the M9 service pistol to mortars, shotguns, and heavy machine guns. The center also has different vehicle simulations where Marines can practice movement of troops, dealing with enemy resistance, and many other situations where Marines would have to think on their feet.”

“Digital Twin” simulations allow for flexibility in training route pattern layout without making the design and identification of installation locations too complex. Valid operational results based on capacity prediction are used to develop new mechanistic simulation training route models. During this process, transitions between any pair of “Digital Twin” states can be represented by conditional probabilities in a sequential series.

The end result of “Digital Twin” training script generation is a probability function that pick-ups in an origin zone will transit to a particular destination zone at a particular time determined by the simulation network. The probability that agents will choose a particular mode for training script generation between each pair of “Digital Twin” zones is based on the relative pick-up benefits associated with each mode option, detailing the different component types.
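The mode-choice step just described can be sketched as a logit share: the probability of each mode between a zone pair is proportional to the exponential of its relative benefit. The mode names and benefit values below are invented examples, and the logit form is one common assumption, not necessarily the model used here:

```python
# Sketch: mode-choice probabilities from relative pick-up benefits.
# Benefit values and the logit form are illustrative assumptions.
import math

def mode_shares(benefits):
    """Logit shares: p(mode) proportional to exp(benefit)."""
    z = {m: math.exp(b) for m, b in benefits.items()}
    total = sum(z.values())
    return {m: v / total for m, v in z.items()}

shares = mode_shares({"air": 2.0, "ground": 1.0, "hold": 0.0})
```

The shares always sum to one, and only benefit differences matter, which is exactly the "relative benefits" property the text relies on.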

Training route service segments are assigned to particular training script generation paths through an iterative process that considers temporal factors along alternative quote networks. Planning models for agents can provide the number and times of trips made by each component type between the “Digital Twin” zones and system-wide, transit speeds along route service segments, and pick-up mode splits, depending on how far the temporal mode choice models have advanced.

In a force structure determination involving only one set of Digital Twins, agents were assigned priority status because one of the two installations would conduct coordinated calls over the quote network interface system through local network calls, which halved the number of routing trips compared with both installations sending packets over the network for simulation routes.

As agents considered expanding the training route patterns over multiple installations, it became clear that implementing the new quote network interface features had become much more complex, so the agents decided to advise separate training route patterns.

The quote network simulation maintains a list of routes, each of which is connected to a single installation. For the duration of programme execution, a cycle is maintained through the force structure list, designed to provide options for meeting the requirements of surge contingency scenarios. Agents are charged with checking whether any packets are waiting to be read from each simulation route.

Controls on board route-service tracking requirements must be programmed differently to factor in common work-order braking rates and operating speeds when equipment training simulations based on condition indices could occupy two or more track blocks in surge contingency scenarios.

A Mission Reliability Digital Twin model must be constructed to depict the intended utilisation of elements to achieve mission success. Elements of the item intended for alternate modes of operation must be modelled in a parallel configuration, or a similar Lego-block asset construct, appropriate to the mission phase and mission application.

The number of Fleet Simulation asset identification tags available to operations is a factor in determining force structure for operations that require restructuring, designed using factors including availability, acquisition, and records disposal. This information is used as an input for assessing the outcome of interactions between installations in the system availability quote networks.

An installation site’s development of simulation asset tracking deployment should be reviewed and based on substitute resources once use has been established; the tagging and tracking of asset implementation goes through several iterations.

Simulation deployments are paired to “Digital Twin” training script pairs in the quote network at the entrance to the training test script scenario builder, so an operational commitment exists and the status update is dispatched.

The training script directive passes through a bottleneck and is tagged in the quote network upon deployment with a redundant logging system, and route performance metrics are entered as the simulation deployment proceeds from the installation where use is monitored.

Tracking tags detailing operational risks can be designed as components of the quote network, contributing to process control and leading to customised action for training script dispatch. This accounts for the results of substitute resource programmes that mitigate the accumulation of adverse risk factors, which could otherwise contribute to inaccurate dispatch of asset identification tags for inclusion in the quote network, before installations apply a time stamp to the asset tracking status update record.
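The time-stamped status update mentioned above can be sketched as an append-only log keyed by tag. The tag identifier, status names and log shape below are illustrative assumptions, not a real tracking schema:

```python
# Sketch: append a time-stamped status record per asset tag.
# Tag IDs, states and schema are assumptions for illustration.
from datetime import datetime, timezone

def stamp_update(tag_log, tag_id, status):
    """Append a time-stamped status record for an asset tag."""
    tag_log.setdefault(tag_id, []).append(
        {"status": status, "ts": datetime.now(timezone.utc)})
    return tag_log

log = {}
stamp_update(log, "TAG-0042", "deployed")
stamp_update(log, "TAG-0042", "in-transit")
latest = log["TAG-0042"][-1]["status"]
```

Keeping the history rather than overwriting it is what lets route performance metrics later aggregate and correlate per line item.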

The construction of training script contingency scenarios has produced the substitute Simulation Asset Tracking Identification Tags required for deployment, by combining several elements of the application types used for route performance metrics.

“Digital Twin” duplicate assets are procured in the quote network when substitute resource techniques cannot be identified and simulation portfolio pooling is not possible. Quote network technology can support a wide range of applications, from asset tracking to process control, each with implementation-specific requirements.

Asset tracking applications are used to identify resource techniques designed to mitigate risks to the programme. An important difference between relatively simple simulation deployment contingency scenarios and advanced asset tracking applications is that simple systems can detect the presence of physical or operational factors only in a single network.

Asset tracking programmes, by contrast, require more than one “Digital Twin” pass through the system, as well as more frequent quote network determinations, so that the route performance metrics can aggregate and correlate information for each operational line item.

If the primary purpose of the Simulation Deployment application is tracking operational risk factors rather than specific physical items, then the network status update changes frequently according to deployment phase. In access control applications, if an asset identification tag code acts as a key for an individual physical item, then nothing should change once the items are linked by Digital Twins.

As Marines begin to tackle operational challenges to compete in 21st century combat, expeditionary logistics is an area receiving extra attention to ensure troops are more agile and effective.

The Battle Simulation Center works closely with the MAGTF Integrated Systems Training Center, which focuses on command and control systems training, primarily at the larger scale (the company, battalion, and regimental levels), while other efforts are being designed to train Marines at the fire team through platoon levels by integrating simulations with live training exercises. “One of the things we’re looking at is the integration of live forces in the field with virtual and constructive simulation.

If a company is training in the field alone, we can simulate other units on the battlefield that don’t really exist, but are needed for staff planning purposes.” Constructive simulation is fully operational.

"The idea behind the Simulation Construct effort is to create a persistent capability which permits collective training in distributed/constructive scenarios in order to enhance integrated training. During Simulations, Marine pilots, Joint Terminal Attack Controllers, the Direct Air Support Center and the Fire Support Coordination Center/Fire Direction Center will train in conjunction with battalion staff using distributed simulation."

"Using multiple simulations together does create a lot of challenges and issues, such as making sure that a model that comes up in one simulation will appear the same way in another, and making sure that the terrain is the same across all platforms. We continue to work through these issues to try to refine the simulations and make them more realistic."

Another goal of the Simulation initiative is to provide more realistic training for Marines. The Ground Training Simulation Implementation Plan uses simulations allowing Marines and units to replicate situations and conditions that are more difficult to enact in certain on-the-ground training scenarios.

"This training helps to emphasise operational cohesion by providing more realism in an exercise where you're relying on the proficiency of other Marines, as well as the realistic uncertainty and miscommunication that can occur when it's real individuals participating instead of a role player. It allows for more development of critical thinking, exposure to non-standard events and increased integration with external factors."

"We are getting the support and flexibility from the Marines who are participating because they understand that there are challenges associated with experimental training exercises. The feedback we get from them helps to shape the way we move forward with setting up future simulation-based exercises. This wouldn't be possible without the support of the Marines and agencies participating."

Marines are testing these capabilities by participating in “Digital Twin” live-fire command post exercises. Some of the participating vehicles will be autonomous weapon systems set to demonstrate their ability to reduce the need for dedicated manpower on often dangerous re-supply logistics missions.

When Marine leaders describe the expeditionary logistics experiment, a major goal is to demonstrate the capabilities of autonomous weapon systems in force protection and in the building and delivery of supplies to isolated troops through hazard zones.

Autonomous systems will also be on display during enhanced logistics base experiments. They are expected to demonstrate the ability of autonomous and automated systems to significantly improve military logistics by upgrading services while reducing manpower requirements.

To test the ability of autonomous weapon systems to protect expeditionary bases, forces will “demonstrate a set-up where unattended ground sensors, shot detection sensors and camera-based sensors are fused and report to a unified user interface on the Command and Control system.” The activity will incorporate unmanned ground, air and surface systems in the sensor package.

Remotely operated weapon stations will be operated both as sensors and as weapon platforms to engage resistance. As with other experiments during the exercise, the point of the drill is to demonstrate how autonomous systems can reduce the number of personnel needed for key expeditionary missions.

As a rapidly deployable force, single force structure units are likely to be involved in several mission types. So what equipment is needed to support all mission types, and what are the effects of shortfalls on mission success? First, appropriate missions must be identified:

Sun, 11 Nov 2018 15:12:16 GMT
http://www.marinemagnet.com/status-updates/top-10-blockchain-tech-establish-market-network-connection-required-for-digital-twin-expression

“Digital Twin” stakeholders have reached the limit of their patience with Blockchain hype, confirming that something is going wrong. Sure enough, all major firms exploit that hype and already offer some Blockchain services, directing customers to their limited, “corporate bullshit” toolsets. They advertise superior services but offer only a narrow class of use cases.

Site Visit Executive looked at the problem and collapsed the apparently wide range of Blockchain services into a couple of topics: supply line optimisation and streamlining network marketing processes.

Blockchains themselves are not yet the solution to anything. Instead, they are the path to experiments that guarantee results.

Until DoD has Blockchain in some form, no new “Digital Twin” model can work. Spatial navigation became possible after DoD got GPS sensors in its pockets; situational awareness became possible after DoD got the sensors connected.

Not side products but secondary products of Blockchains, broadly adopted tokens develop into applicable results such as “Bucket Brigades”.

Bucket brigade systems are a tool to build robust simulated robot control systems. This choice is sufficient to achieve adequate levels of performance for a variety of behaviours. A parallel implementation of the bucket brigade system speeds up the training process used to implement the robotics controller.

Bucket brigade systems provide guidance that shortens the number of cycles required to learn task rules, starting from randomly generated classifiers and using only a few training examples. Bucket brigades compensate for a lack of time to deploy field-ready tech.
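The credit-assignment idea behind a classifier-system bucket brigade can be sketched in a few lines. This is a minimal illustration under simplifying assumptions (a fixed firing chain, a single external reward at the end, a constant bid ratio); the classifier names and parameter values are invented for the example. Each classifier that fires pays a bid, a fraction of its strength, to the classifier active at the previous step, and the environment rewards only the final stage, so payoff gradually flows backward along the chain.

```python
# Minimal sketch of bucket-brigade credit assignment (assumed parameters).

BID_RATIO = 0.1  # fraction of strength a classifier bids each time it fires

def bucket_brigade(strengths, chain, reward, episodes=50):
    """Update classifier strengths along a fixed firing chain.

    strengths: dict classifier-id -> strength
    chain: ordered list of classifier ids that fire in sequence
    reward: external payoff delivered after the final classifier fires
    """
    for _ in range(episodes):
        for i, cid in enumerate(chain):
            bid = BID_RATIO * strengths[cid]
            strengths[cid] -= bid              # the firing classifier pays its bid
            if i > 0:
                strengths[chain[i - 1]] += bid # the previous stage receives it
        strengths[chain[-1]] += reward         # environment rewards the last stage
    return strengths

strengths = {"detect": 100.0, "orient": 100.0, "act": 100.0}
bucket_brigade(strengths, ["detect", "orient", "act"], reward=20.0)
```

After a few dozen episodes the stage closest to the reward ends up strongest, with earlier stages progressively weaker, which is the backward flow of credit the bucket brigade is designed to produce.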

Most DoD leaders associate bucket brigades with disasters, but they are mostly used in normal conditions to load or unload something, for example a truck of ammo supplies. Another example of Bucket Brigades is warning beacons.

In every case, Bucket Brigades compensate for a lack of technology, temporary or permanent. Information transfer is compromised by the technological shortcomings of networks unable to allow for multiple interpretations or to deliver information with fidelity.

The multi-agent approach provides a specific modeling and simulation alternative to known math/science system model tech for simulating the manoeuvre process.

In a tech set without Bucket Brigades, where the trend to automate with machine learning is stronger than ever, each node is as stupid as a box of rocks unless it is perfectly prepared for the job. Each element has to be precisely calibrated or the system is in major trouble. Even if some tolerance is allowed, error propagation is a strong and adverse phenomenon.

Blockchain solutions have information about how many times a resource was used as reward and for what segment of equipment, so you can see in what operational theatre the discount effect is greatest. Robots can write and pass comments on it.

We set up an experiment involving “Digital Twin” robots learning to work together: one robot ideally handing off Tokens to the other, which in turn carries them to a final destination. Discounting rewards results in the first robot receiving significantly less reward than the second one.

And, importantly, the discount power grows when other customers are more relevant to you.
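The asymmetry in the two-robot experiment falls directly out of per-step discounting, which can be shown in a tiny sketch. The discount factor and step counts here are assumed values, not figures from the experiment: the reward is granted on final delivery, and each agent's share is discounted by how many steps before delivery its contribution occurred.

```python
# Hypothetical sketch of discounted reward shares in the token handoff.

GAMMA = 0.9  # per-step discount factor (assumed)

def discounted_share(steps_before_delivery, reward=1.0, gamma=GAMMA):
    """Reward credited to an agent whose contribution happened
    `steps_before_delivery` steps before the final delivery."""
    return reward * gamma ** steps_before_delivery

first_robot = discounted_share(steps_before_delivery=5)   # hands off early
second_robot = discounted_share(steps_before_delivery=0)  # delivers the token
```

With these assumed numbers the delivering robot keeps the full reward while the early hand-off robot receives roughly 59% of it, matching the observation that the first robot is credited significantly less.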

How can that possibly work?

Tokens are transferred between Digital Twin agent pairs in the Bucket Brigade, without any centralised system watching or authorising interactions.

Tokens aren’t anything new. Each of us uses incumbent tokens daily without even noticing: keys, tickets, receipts, reservations, network certificates etc. Ordinary tokens are already vital, but they are not very smart. More importantly, they are expensive to issue and even more expensive to maintain, with a large portion of implementation barriers associated with security.

How can tokens change the way DoD does business and consumption of operational resources?

As open token-carrying platforms become more widespread, anyone can maintain an unlimited number of token outputs. Soon, piles of tokens will represent everything that can be counted in an economically meaningful way.

Within token-enabled supply chains, every participant seamlessly contributes to the quality of the automated event flow. Most economic acts can be done through token exchange, issuance or redemption. When tokens circulate, the things you normally run operations and procurement on happen “by themselves”, with much less overhead than normal.

As more Blockchain bridge connections are manifest in operations, tokens promise new productive economic scenarios: random “Digital Twin” token pairs can become mutually usable in changing conditions as a result of more reasonable inputs, so many essential operational parameters can be articulated with greater precision.

On one hand, smart decision-making can be rewarded immediately in tokens. On the other hand, it always costs some number of tokens to do something. So DoD operatives either make an economically responsible decision or abstain from any involvement rather than contaminate the feed.

When passed continuously between units, tokens can be very smart when it comes to changing scenarios.

Passing Tokens from one unit in the Bucket Brigade to another is a fundamentally local event. No one but the units party to the exchange is required, if we consider the distributed token-carrying platform connection bridges between them as free-access, self-maintaining, ownerless entities.

Examples of that ubiquitous miracle of local interaction are everywhere: the great complexity of physical phenomena troops encounter is the result of endless iterations of similar “local acts”. Circles on the water don’t need a concentric dispatcher.

Bucket Brigade junctions in token interactions can be much smarter: the constructs can also bear an often-needed note of context, rather than the iron extremes present in automated operations.

But what about things more complex than simple transfer of value, such as the level of local-interaction relevance under changing operational conditions, where the quality of information cannot be quantified?

Establishing field agents for product/process design creates agent-based tools to construct market places among members of a distributed design team to coordinate set-based design of a discrete build product. Designers of components are empowered to "Buy" and "Sell" the desired characteristics engineers are motivated to assume.

Here we describe the entities interacting in the market space and outline the market space required to make trade-off decisions on each characteristic of a design. Agents representing each component "Buy" and "Sell" units of these characteristics. A component that needs more latitude in a given characteristic, e.g. more weight, can purchase increments of that characteristic from another component, but may need to sell another characteristic to raise resources for this purchase.
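The buy/sell mechanism described above can be sketched with two toy component agents. Everything here is illustrative: the class, the "wing"/"fuselage" components, the characteristic names and prices are invented to show the pattern, where an agent sells latitude in one characteristic to raise the credits it needs to buy latitude in another.

```python
# Illustrative sketch of design agents trading characteristic budgets
# (component names, budgets and prices are hypothetical).

class ComponentAgent:
    def __init__(self, name, budgets, credits=10.0):
        self.name = name
        self.budgets = dict(budgets)  # characteristic -> allotted amount
        self.credits = credits        # resources available for purchases

    def sell(self, characteristic, amount, price):
        """Give up part of a characteristic budget in exchange for credits."""
        if self.budgets.get(characteristic, 0) < amount:
            raise ValueError("insufficient budget to sell")
        self.budgets[characteristic] -= amount
        self.credits += price

    def buy(self, seller, characteristic, amount, price):
        """Purchase budget increments of a characteristic from another agent."""
        if self.credits < price:
            raise ValueError("insufficient credits")
        seller.sell(characteristic, amount, price)
        self.credits -= price
        self.budgets[characteristic] = self.budgets.get(characteristic, 0) + amount

wing = ComponentAgent("wing", {"weight": 120.0, "drag": 5.0})
fuselage = ComponentAgent("fuselage", {"weight": 300.0})
wing.sell("drag", 1.0, price=4.0)           # raise credits by tightening drag
wing.buy(fuselage, "weight", 25.0, price=8.0)  # buy weight latitude elsewhere
```

The trade leaves total weight allocation unchanged across the design while shifting latitude toward the component that values it most, which is the point of a market-based trade-off mechanism.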

In network marketing, each participant essentially has one core asset: the position/order tracking tag in the network, recorded with fidelity. Tokens “locked” into the system provide privileges in using the system as intended. It could be a revenue share in a typical marketing scheme, or it could be a discount on the risk charge the system levies for its services.

In distributed problem solving, we typically assume a fair degree of fidelity is present: the agents have been designed to work together; or the payoffs to self-interested agents are only accrued through collective efforts; or engineering relationships between units have introduced disincentives for agent individualism; etc.

Distributed problem solving concentrates on competence; as anyone who has worked on a group project or played on a football team can tell you, simply having the desire to work together by no means ensures a competent collective outcome.

We have described a single phase of the cooperation life-cycle at the Enterprise-to-Enterprise level: the search for possible product support collaborators. First, agents have to contact possible partners. There is a wide field for future research in the domain of automatically searching for and contacting possible partners.

This approach ensures that the trustworthiness of the partners is transferred from real life to the agents' cooperation. Each agent is equipped with the addresses and security certificates, and every partner can be authenticated using standard key methods. Every agent can be connected to many partner agents according to defined internal cooperation rules.

Once the agents are connected, each agent provides a list of available product support capabilities to its partners. It is possible to propose different capabilities to different partners. During this phase agents form a basic cooperation network, receiving information suitable for effective collaboration in the next phases. During the life-cycle of the cooperation, agents subscribe to information about changes in product support resources over already-established cooperation links.
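The capability-exchange phase just described can be sketched as a small agent class. This is a hedged illustration, not any deployed protocol: the class, the "depot"/"squadron" agents, and the capability strings are hypothetical, and authentication is assumed to have already happened. It shows connected agents advertising possibly different capability lists to different partners and subscribing to later changes.

```python
# Hedged sketch of the agent cooperation-network phase (names are illustrative).

class SupportAgent:
    def __init__(self, name):
        self.name = name
        self.partners = {}        # partner name -> SupportAgent
        self.offered = {}         # partner name -> capability list offered to them
        self.known = {}           # partner name -> capabilities received from them
        self.subscribers = set()  # partners to notify when our resources change

    def connect(self, other):
        """Form a mutual cooperation link (authentication assumed already done)."""
        self.partners[other.name] = other
        other.partners[self.name] = self

    def offer(self, partner_name, capabilities):
        """Advertise a capability list; different partners may see different lists."""
        self.offered[partner_name] = list(capabilities)
        self.partners[partner_name].known[self.name] = list(capabilities)

    def subscribe(self, partner_name):
        """Ask a partner to push updates over the established cooperation link."""
        self.partners[partner_name].subscribers.add(self.name)

    def update(self, partner_name, capabilities):
        """Change an offer; subscribed partners receive the new list."""
        self.offer(partner_name, capabilities)

depot = SupportAgent("depot")
squadron = SupportAgent("squadron")
depot.connect(squadron)
depot.offer("squadron", ["engine repair", "avionics test"])
squadron.subscribe("depot")
```

After this exchange the squadron holds the depot's advertised capability list and will be notified of changes, which is the "basic cooperation network" the phase is meant to establish.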

Building Blocks required for Digital Twin manoeuvre are quite similar to building blocks required to implement use of Blockchain with trusted status updates of connected instances.

To deliver value, connections must span a wide mission space. Implementation of connections must not be tied to distinct established steps or locations, but must be time sensitive, to maximise transmission and minimise sense/response latency between the edge and the core mission space.

Have we encountered Block and Connection concepts elsewhere?

Yes we have. Multi-Agent Systems are present in Building Block Constructs, with each Agent representing one block component, viewed as a baseline unit contribution to the Digital Twin Model.

The convergence of Digital Twins and Blockchain is evident. Enterprises dissociated by modular structures and associated by function in operational sequences present a series of steps subdivided into blocks: not only things/objects but also multi-agent models, units of work, processes, verification decisions, outliers, feedback, metrics etc.

Component Sequence Builds make it easy to represent objects, processes, and decision outcomes. Connected blocks can support simulation agent networks joined by common Digital Twins. For example, alignment concepts described in previous reports, specific to appropriate blocks, can lead to useful platforms.

Here we present a practical application of product support provider network interaction to a major weapons system. Since the first operational deployment of the F-35B, the Marine Corps has seen part shortages/delays, poor reliability of certain parts, long repair times and inaccurate delivery times.

DoD has no formal mechanism to share after-action reports on a distributed network, so each service has kept its own internal records system. “Without the F-35 program office sharing or making available operational lessons learned through a new or existing network communications mechanism, the services are at risk of not having access to key information that could affect their movements, exercises, operations, and sustainment of the aircraft.”

The formal sharing of lessons learned over the network would be extremely useful to the Services as they ramp up their F-35 deployments overseas. Since the DoD has no formal means to communicate these reports, the Corps has largely relied on informal means such as personal relationships and telephone calls to officials in the other services.

It’s a major problem to be solved, since the service branches are planning to expand F-35 operations, so we are making a recommendation to the DoD to create a formal network mechanism to communicate and share F-35 after-action reports across the military.

"The goal is to prevent lessons learned from being captured in a vacuum within each military service, but rather to have them captured and shared among the joint force to create, among other things, better doctrine, policy, training and education."

The Marine Corps has noticed supply-chain problems that could be solved using Blockchain networks, including a shortage of parts in the F-35 supply chain, longer repair times for certain parts, and inaccurate estimated arrival times for those parts.

ALIS is a high-tech computer system that informs maintainers of upcoming aircraft maintenance and the parts required to help sustain the aircraft. The Marine Corps is “uncertain how long the F-35 can effectively operate” if the Autonomic Logistics Information System, or ALIS, becomes “disconnected from the aircraft.”

At an exercise near Twentynine Palms, California, the Corps recorded “issues related to the tents used to house the ALIS” and the “need for maintaining network connectivity, and the limited reach-back support for ALIS.”

During another exercise, the squadron noted accomplishments such as the “F-35 using its sensors to share data with legacy platforms” and better stealth capability than other, aging aircraft. They also reported the need for classified facilities “to meet basic cooling and power requirements for housing the ALIS servers.”

DoD plans to continue its evaluation of ALIS's performance and agrees that "future testing is worthwhile, so information is made more accessible across the services operating the F-35", a position already accepted by the Joint Program Office and the Pentagon.

DoD said it is interested in starting to communicate many F-35 issues, such as product support requirements, through a new "Network Bank" registry for lessons learned, but did not specify where or how the forum will be based. "As F-35 operational exercises grow, the department will continue to share lessons learned through the existing operational advisory group and supportability advisory group."

DoD F-35 program is at a critical juncture. With aircraft development nearing completion within the next few years, DoD must now shift its attention and resources to sustaining the growing F-35 fleet. While production accelerates, DoD’s reactive approach to planning for and funding the capabilities needed to sustain F-35 operations has resulted in significant readiness challenges—including multi-year delays in establishing repair capabilities and spare parts shortages.

There is little doubt that the F-35 brings unique capabilities to the military, but without revising sustainment plans to include the key requirements and decision points needed to fully implement the F-35 sustainment strategy, and without aligned funding plans to meet those requirements, DoD is at risk of being unable to leverage the capabilities of the aircraft it has recently purchased. Furthermore, until it improves its plans, DoD faces a larger uncertainty as to whether it can successfully sustain a rapidly expanding fleet.

DoD's plan to enter into multi-year, performance-based contracts with the prime contractor has the potential to produce cost savings and other benefits. However, important lessons are emerging from its pilot agreements with the contractor that are intended to inform the upcoming multi-year contract negotiations. To date, DoD has not achieved the desired aircraft performance under the pilot agreements, but it continues to move quickly toward negotiating longer-term contracts—which are likely to cost tens of billions of dollars—by 2020.

The contractor is assigned the task of integrating sustainment support for the system, including the F-35 supply chain, depot maintenance, and pilot and maintainer training, as well as providing engineering and technical support.

According to program officials, the establishment of a new Product Support Network Integrator is an acknowledgement that DoD needs to take a more significant role in providing sustainment support for the F-35.

DoD did not plan for and fund stocks of materials needed to repair parts at the depots, incorrectly assuming the material would be included as part of the contracts for establishing repair capabilities at the military depots.

So DoD has had to fund and negotiate additional contracts for the material. Because of late requirements identification and a lack of funding, material to support repairs for many components is not expected to be delivered to depots until months or years after the technical capabilities to conduct repairs have been established.

Without examining whether it has the appropriate metrics to incentivise the contractor or a sufficient understanding of the actual costs and technical characteristics of the aircraft before entering into multi-year, performance-based contracts, DoD could find itself overpaying for sustainment support that is not sufficient to meet warfighter requirements.

Finally, on a broader level, DoD's projected costs to sustain the F-35 fleet over its life cycle have risen over the last several years despite the department's concerted efforts to reduce costs.

Already the most expensive weapon system in DoD history, these rising costs are particularly concerning because the military services do not fully understand what they are paying for. This puts them in a precarious position as they consider critical trade-offs that might make F-35 sustainment more affordable. Without improving Network Communications with the services to help them better understand how the sustainment costs they are being charged relate to the capabilities that they receive, the services may not be able to effectively budget for the F-35 over the long term.

DoD has limited visibility into the support provided by the contractor along with the actual costs for which the services are responsible, until after the contract is signed. These transparency concerns are complicated by the fact that the services are paying into shared pools for F-35 sustainment, and the costs they are being charged for some requirements—such as for spare parts—cannot be directly tracked to an item that the services own or support that is specifically provided to an individual service.

As we have outlined in this report, Blockchain is an emerging technology for decentralised, transactional supply line connection monitor sharing across a large group of supplier network intersections. It enables new forms of distributed supply line connection monitor networks, where agreement on shared states can be established without trusting a central integration point. A major difficulty for architects designing applications based on blockchain is that the technology has many configurations and variants. Since blockchains are at an early stage, there is a limited number of product support success stories or reliable technology evaluations available to compare different blockchains.

Blockchain brings significant improvements to supply chain management for manufacturers and precision parts suppliers. Blockchain is the digital and decentralised exchange of value technology that records all transactions without the need for an intermediary. Businesses aim to manufacture goods — whether end products, solutions or precision parts of the highest quality, for the best price, with the greatest technical support, and according to agreed timelines.

Blockchains enable the creation of intelligent, embedded and trusted programme supply line connection monitor, letting suppliers build terms, conditions and other logistics parameters into contracts and other transactions. It allows suppliers to automatically monitor agreed upon value figures, delivery times and other enabling conditions, and automatically negotiate and complete transactions in real time. This impacts cost/benefit of work orders, maximises efficiency and allows for multiple avenues leading to supply line connection monitor.

A blockchain is a shared, distributed, secure supply line network connection monitor that every participant on product support service routes can share, but that no one entity controls. In other words, a blockchain is a supply line connection monitor that stores work order routing records. The routing intersection is shared by a group of service route supplier participants, all of whom can submit new records for inclusion.

Blockchain records are only added to the supply line connection monitor based on the agreement, or consensus, of a majority of the supplier group. Additionally, once the records are entered, they can never be changed or erased. In sum, blockchains record and secure supply line route dispatch information in such a way that it becomes the agreed-upon record, for groups like F-35 stakeholders, of important contract terms and enabling conditions.
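The two properties just named, majority agreement before a record is added and tamper-evidence after it is added, can be sketched in a minimal hash-chained ledger. This is a teaching sketch under strong simplifying assumptions: consensus is reduced to counting votes from a named supplier list, and the supplier and part names are invented. A production blockchain uses far more robust consensus and signing.

```python
# Minimal sketch of an append-only, hash-chained supplier ledger
# (supplier and part names are hypothetical; consensus is a naive majority vote).

import hashlib
import json

def record_hash(record):
    """Deterministic hash of a record's contents."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

class SupplyLedger:
    def __init__(self, suppliers):
        self.suppliers = list(suppliers)
        self.chain = []  # each entry links to the hash of the previous one

    def submit(self, payload, votes):
        """Append a record only if a majority of the supplier group approves."""
        if len(votes) <= len(self.suppliers) / 2:
            return False
        prev = self.chain[-1]["hash"] if self.chain else "0" * 64
        record = {"prev": prev, "payload": payload}
        record["hash"] = record_hash({"prev": prev, "payload": payload})
        self.chain.append(record)
        return True

    def verify(self):
        """Tampering with any earlier record breaks the hash links."""
        prev = "0" * 64
        for rec in self.chain:
            expected = record_hash({"prev": rec["prev"], "payload": rec["payload"]})
            if rec["prev"] != prev or rec["hash"] != expected:
                return False
            prev = rec["hash"]
        return True

ledger = SupplyLedger(["depot", "oem", "squadron"])
accepted = ledger.submit({"part": "hypothetical-canopy-seal", "qty": 4},
                         votes={"depot", "oem"})
```

A submission backed by only one of three suppliers is rejected, and editing an already-entered record in place causes `verify()` to fail, which is the "never changed or erased" guarantee in miniature.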

Smart contracts can be instantly and securely sent and received over the Blockchain Network, reducing exposure and delays in back office dispatching. As an example, oversight of Purchase Requests could be securely implemented with greater transparency, and a potential battlefield messaging application could be leveraged in instances in which troops are attempting to communicate back to HQ using a secure, efficient and timely logistics system.

Built-in supplier incentives to assure the security of every transaction and asset in the blockchain allow routing technology at intersections to be used not only for transactions, but as a product registry system for recording, tracking and monitoring all assets across multiple value suppliers. This secure information can range from information about parts to contract work-in-progress such as product specifications and purchase orders.

Because blockchain is based on shared consensus among different suppliers, the information on the blockchain is reliable. Over time, suppliers build up a reputation on the blockchain which demonstrates their credibility to one another. Furthermore, because trust can be established by the supply line connections, third-party monitoring of routing intersections between two suppliers will no longer be necessary.

In order to establish sufficient trust to become involved in a blockchain supply line connection monitor, the motives and goals of DoD and involved suppliers must be clear. The reputation of the participants becomes transparent and grows over time. It is important that suppliers in the routing market space can trust each other in order to share information and increase efficiency in shared processes.

Blockchain networks also open the door for machine-to-machine transaction capabilities to enable the transformation of a traditional supply line connection, where work order transactions and contracts must be maintained by each DoD dispatcher agent in market interaction with suppliers.

1. Cut procurement costs and production time by zeroing in directly on the right suppliers who can create the correct, high-quality parts

2. Speed prototype evaluation to test and modify design before production at a more competitive cost

3. Work with pre-certified precision parts suppliers and machine shops to procure more in less time

10. Orders, designs and fabrication are secured, providing protection not always afforded to smaller manufacturing firms or suppliers

Thu, 01 Nov 2018 13:27:38 GMT
http://www.marinemagnet.com/status-updates/top-50-elements-required-for-simulation-training-application-of-station-tasks-instruct-schedule-process

Simulation can become an effective training tool to improve the competence of Marines, providing a mechanism for determining whether Marines are ready for action on a much more comprehensive basis than through current examinations. A stronger station base must be developed to address issues of standardisation and validation.

Training programs using simulation often insert simulation into existing courses rather than customising the course to ensure that the simulation contributes effectively to the course training objectives. One result has been a lack of standardisation in simulator-based courses.

The major benefits of simulation will be realised with a more structured approach to the use of simulation for training. Benefits include the ability to train in simulators regardless of conditions; instructors can terminate training scenarios at any time; and scenarios can be performed under risk-free conditions, repeated, recorded and played back.

However, Marines are trying not to bound the effort too rigidly, because someone somewhere might submit a totally unexpected idea that changes the way we look at amphibious operations. We don’t want to limit training proposals in any way.

A next step towards achieving vision of connecting multiple simulators spread across the battlespace is the integrated training station to house, all under one roof, simulators for pretty much anything in the carrier strike group. We’ll be able to integrate them all together. Eventually we will be able to pipe in feeds from live aircraft out on our range – that’s the live part, and then vice versa hopefully we can pipe what’s being seen in the simulators, or what’s being constructed in the simulators, out to the live aircraft as well.

Professional development of Marines has in the past been based on a strong tradition of on-the-job learning. There is a wide range of Marine simulators in use worldwide. Simulator capabilities range from radar-only to full-scale ship-bridge simulators capable of simulating a 360-degree view.

Marine simulators can simulate a range of vessels in realistic generic operating conditions, e.g., ports and harbors. Simulators can be used to train Marines in a number of skills, from rules of the road and emergency procedures to bridge team resource operations.

A simulator does not train; it is the way the simulator is used that yields the benefit. It is easy to be impressed by the latest, largest full-mission simulator, but what is more important than the technology is how training methods are applied and whether it increases training effectiveness significantly, incrementally, or at all.

Physical scale-model, or manned-model, simulators are scale models of specific vessels that effectively simulate ship motion and handling in fast time. These models are especially effective for teaching shiphandling and manoeuvre skills.

New training structures assess training effectiveness from specific simulator features. While some work has supported the notion that higher levels of fidelity add to training effectiveness, other work does not. For example, there is no evidence in the air carrier community that motion systems add to the training effectiveness of a simulator. Despite the widespread acceptance of motion systems, evidence is inconsistent.

We decided to work out simulation problems after units depart on deployment. There would be no impact on the response plan, which by then would have run its course. No one could object to the complexity of the task as the players involved would be trained and certified units. The fleet could focus them on whatever warfighting tasks seemed most critical, separate from a set training regimen.

It is difficult to determine the validity and degree of equivalency between simulator training and shipboard experience without an evaluation of transfer. The issue is whether skills learned in a simulator can be employed aboard ship.

The most systematic way to test the application of this training to shipboard performance would be to compare the shipboard performance of simulator-trained individuals as a group to the performance of a group whose only difference is the lack of simulator training. Logistically, these studies are difficult to execute within the air carrier sector and may be even more difficult in sectors lacking a systematic organisational structure.
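The group comparison described above can be sketched in a few lines. The scores below are entirely invented for illustration; a real transfer-of-training study would also need matched groups, blinded raters, and a significance test, none of which is modeled here. The sketch only shows the core estimate: the difference in mean shipboard performance between the simulator-trained group and the control group.

```python
# Illustrative sketch of the transfer-of-training comparison (data are invented).

def mean(xs):
    return sum(xs) / len(xs)

def transfer_effect(treated, control):
    """Simple effect estimate: difference of group mean shipboard scores."""
    return mean(treated) - mean(control)

simulator_trained = [82, 88, 91, 79, 85]  # hypothetical shipboard scores
no_simulator = [74, 80, 77, 83, 72]       # control group, no simulator training

effect = transfer_effect(simulator_trained, no_simulator)
```

A positive effect would suggest that skills learned in the simulator transfer aboard ship; with groups this small, the estimate would of course need a proper statistical test before any conclusion is drawn.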

Marines' duties and responsibilities are dictated by their work space: the sometimes highly stressful and demanding environment of automated ships, short turnaround times in port, smaller crew sizes, and the self-contained independence of long sea voyages. Deck officers must be knowledgeable in skills ranging from watchkeeping and navigation to cargo handling and radar.

Marine pilots are highly skilled, functioning independently in scenarios that require understanding the operation of ship-bridge equipment and the manoeuvre capabilities of a wide range of vessels, and the ability to manoeuvre safely through shallow and restricted waters. Pilots must also be knowledgeable in the local working practices of ports and terminal operations.

In applying simulation to training requirements, it is important to consider differences among simulators, that is, the different levels of simulator component capabilities. A high degree of realism is not always required for effective learning transfer. Often it is not necessary to use the most sophisticated simulator to meet all training objectives.

Levels of realism and accuracy required should match the training objectives. Simulators are used for Marine performance evaluation. These evaluations are usually informal and take the form of debriefings during the course of training. Occasionally, however, simulators are used for more structured evaluations.

Systematic application of the instructional design process offers a strong model for the structuring of new courses and the continuous improvement of existing courses. Instructors must ensure that all training objectives are met and themselves be trained to ensure that the simulator-based training courses meet the training objectives.

An effective training programme addresses Marines' training needs with respect to knowledge, skills, and abilities. It exploits all media, from personal computer-based training to limited-task and full-mission simulators, and applies the appropriate training tool to the specific level of training. For example, it would not be necessary to use a full-mission simulator for early instruction in rules-of-the-road training.

A systematic approach to training promotes convergence toward full-mission expertise by developing basic modules of skills in several steps. This approach encourages the assembly of ever-larger skill modules until the trainee can exploit training on a full-mission simulator.

Differences in instructional techniques can result in a significant range of material that can be covered. The way material is covered also affects the relative value of the learning experience. These factors may be affected by simulator features and fidelity; however, limitations in these areas can be minimised or offset to a large extent for certain instructional objectives. For example, we found that creative instructional design in bridge team training can compensate for limitations in simulator capabilities.

Before we had the simulator, Marines were really slow in the first few days on the range because that’s the first time that they did it. But now getting some practice time in, you get better control and better performance on the range with the live assets, so it makes it more efficient. So the simulator is really useful, it’s invaluable as far as getting Marines ready to go.

Ship-bridge simulators and manned models can be effective in the development and renewal of Marine pilot skills in a number of significant areas, including bridge team resource administration, shiphandling, docking and undocking scenarios, bridge watchkeeping, rules of the road, and emergency procedures.

Although current computer-based simulators are limited in their ability to simulate ship manoeuvre trajectories in shallow and restricted waterways and ship-to-ship interactions (capabilities important to pilot shiphandling training), simulator training in areas such as bridge team/resource management can be of value to pilots.

Special-task simulators could be used effectively in Marine training. A limitation affecting widespread use is the limited availability of desktop simulations and interactive courseware. The Marine Corps must selectively sponsor development of interactive courseware with embedded simulations to facilitate understanding of information and concepts that are difficult or costly to convey by conventional means.

Use of simulations offers an effective mechanism for assessing not only Marines' knowledge but also their ability to apply that knowledge, to prioritise tasks, and to perform several tasks simultaneously, all functions routinely required aboard ship. The Marine Corps must develop a framework for integrating simulation into the training programme before it undertakes more extensive use of simulation in training.

The Marine Corps must update and expand relevant task and subtask assessments for application to Marines' training needs. For the instructional design process to be effective, the course design should include the definition of training needs based on the steps required to complete identified tasks and subtasks for specific functions. Assessment must include dimensions that have been missing with respect to behavioural elements and the specific steps needed to execute each subtask.

Standards for simulator-based training courses should be considered in the development of a plan for allowing substitution of simulator-based training for required sea time in limited cases. The ratio of simulator time to sea time should be determined on a course-by-course basis and should depend on the quality of the learning experience, including the degree to which the learning transfers to actual operations.
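One established way to quantify that ratio is the transfer effectiveness ratio (TER): sea time saved per hour of simulator time. The sketch below uses invented course numbers purely for illustration; actual ratios would have to come from measured transfer studies.

```python
def transfer_effectiveness_ratio(control_time_to_criterion,
                                 experimental_time_to_criterion,
                                 simulator_time):
    """Transfer effectiveness ratio: hours of sea time saved per hour of
    simulator time. Values above zero indicate positive transfer."""
    return (control_time_to_criterion - experimental_time_to_criterion) / simulator_time

# Hypothetical course: the control group needs 120 h at sea to reach the
# performance criterion; the simulator-trained group needs 90 h at sea
# after 40 h in the simulator.
ter = transfer_effectiveness_ratio(120, 90, 40)  # (120 - 90) / 40 == 0.75
print(f"TER = {ter:.2f}: each simulator hour saved {ter:.2f} sea hours")
```

A course-by-course substitution plan would compute this ratio per course and allow simulator substitution only where measured transfer justifies it.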

The accuracy and fidelity of ship-bridge simulators can vary significantly from training station to station. These differences derive from the differences among original models used to develop the simulations and from station operator modifications to models after installation of the simulations.

Often, training station operators periodically modify simulation models after the initial validation. This process of continually modifying simulation models can result in inconsistent training programmes, as successive training sessions may be conducted with different simulations.

To address these concerns, simulators and simulations must be validated, all modifications must be documented, and the simulation must be revalidated. The extent to which the accuracy of a simulation needs to be validated will depend on the proposed use of the simulation.
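The documentation-and-revalidation discipline above can be expressed as a simple record type. This is a hypothetical sketch (the class and field names are invented): any modification is logged and automatically invalidates the previous validation until the model is revalidated.

```python
from dataclasses import dataclass, field

@dataclass
class SimulationModel:
    """Hypothetical record tying a simulator model's validation status
    to its documented modification history."""
    name: str
    version: int = 1
    validated: bool = False
    change_log: list = field(default_factory=list)

    def validate(self):
        self.validated = True

    def modify(self, description: str):
        # Every modification is documented and voids the prior
        # validation until the model is revalidated.
        self.change_log.append((self.version, description))
        self.version += 1
        self.validated = False

model = SimulationModel("Panamax bulk carrier, loaded")
model.validate()
model.modify("Increased rudder effectiveness at slow speed")
print(model.validated)  # False: revalidation required before training use
```

The design choice is that revalidation cannot be forgotten: the flag is cleared by the modification itself, not by operator discipline.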

Equivalency of simulation to real life has not been systematically investigated, because existing task assessments are not adequate for this purpose and performance-based task assessments have not been developed for it.

The work of Marines is task-oriented. To be able to effectively apply simulator technology, it is important to systematically measure simulator effectiveness for training and to develop a mechanism to use simulators to improve the effectiveness of the transfer of skills and knowledge.

Ship control and navigation are visually supported tasks, especially in confined areas. Learning visual skills is an important process in the development of proficiency in control and navigation. In many simulators, the visual simulations are provided with systems that have limited capabilities to represent some stimuli. The result can be distortion of distance perceptions as an observer moves around the simulated bridge.

It is possible to draw out these lessons by participating in a simulation involving a crew change, a watch relief, two ports unfamiliar to the new watch officers, and a transit speed that was excessive for the situation but not readily apparent. As the scenario unfolded, bridge team members created enough pressures and problems for themselves without any instruction. The need for more effective passage planning and improved communications among bridge team members was no less apparent than it might have been in a situation artificially influenced by role reversals or problems inserted by the instructor.

The impact on training effectiveness of ship operational characteristics—such as vibration, sound, and physical movement of the bridge in roll, heave, and pitch—has not been verified and should be investigated before applying these systems to simulators.

The Marine Corps must assess the impact on training effectiveness of apparent limitations in simulator visual systems. If these limitations have a negative impact on training effectiveness, visual systems must be developed that overcome or minimise the negative aspects of current systems.

Comprehensive assessment addresses the large number of problems resulting from a lack of understanding within the Marines of the capabilities and limitations of an automated system. For example, when the radar signal-to-noise ratio is poor, the automatic radar plotting aids may "swap" the labels of adjacent targets.

If Marines are not aware of this limitation, they may be navigating under false assumptions about the positions of neighbouring vessels, increasing the chances of a casualty. Comprehensive assessment will identify misconceptions about automated systems that could then be remedied through training or equipment redesign.

The Marine Corps must undertake structured assessments of the need for simulation of vibration, sound, and physical movement. These assessments should include consideration of the possibly differential value of these various sources of information in different types of training scenarios.

Manned models are an effective training device for illustrating and emphasising the principles of shiphandling. They are particularly effective in providing hands-on ship manoeuvre in confined waters, including berthing, unberthing, and channel work. Manned models can simulate more realistic representations of bank effects, shallow water, and ship-to-ship interactions than electronic, computer-driven ship-bridge simulators.

The ability of a simulator to closely replicate the manoeuvre trajectory of a ship is a strong measure of the usefulness and value of the simulator for training. At present, simulation of ship manoeuvre trajectory is well developed for normal deep-water, open-ocean cases. In cases involving shallow or restricted waterways, ship-to-ship interactions, and extreme manoeuvres, fidelity may be significantly reduced.

Conduct of full-scale real-ship experiments would significantly advance the state of practice in model development. These experiments could supplement the limited information available on shallow and restricted-water, slow-speed, and reverse-propeller operations.

The Marine Corps must develop standards for the simulation of ship manoeuvre. The fidelity of the models must be validated through a structured, objective process. Standard models must be selected and tested in towing tanks, and the results compared to selected full-scale real-ship trials of the same ships, to provide benchmark metrics for validation and testing of simulators.
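A benchmark metric of the kind described could be as simple as the root-mean-square position error between a simulated manoeuvre and the corresponding full-scale trial, sampled at the same times. The trajectory points below are invented for illustration, not trial data.

```python
import math

def trajectory_rmse(simulated, benchmark):
    """Root-mean-square position error (metres) between a simulated
    manoeuvre trajectory and a full-scale trial, sampled at matching times."""
    assert len(simulated) == len(benchmark)
    sq_errors = [(xs - xb) ** 2 + (ys - yb) ** 2
                 for (xs, ys), (xb, yb) in zip(simulated, benchmark)]
    return math.sqrt(sum(sq_errors) / len(sq_errors))

# Hypothetical turning-circle samples (x, y in metres) from a simulator
# model and the full-scale trial of the same ship.
sim = [(0, 0), (100, 12), (195, 48), (280, 105)]
trial = [(0, 0), (102, 10), (200, 45), (290, 100)]

error = trajectory_rmse(sim, trial)
print(f"RMSE: {error:.1f} m")
```

A validation standard would then specify the manoeuvres to run (turning circle, zig-zag, crash stop) and the maximum acceptable error for each, by water depth regime.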

What we want to be able to do in the future, and this training station is the first step, is machine-to-machine metrics gathering. That allows us to gather large amounts of metrics: not just how they did on that event and the actual actions they took on that event, but also historical metrics on the aircraft, its systems, and how well the systems have held up.

We can look, automatically, machine-to-machine, at the pilot and how proficient he is and how much flight time the Marine has received recently, and that will all help us build that bigger picture so we can inform leadership with the best simulation information we can give them.
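A machine-to-machine roll-up of that kind might look like the sketch below. The record fields and numbers are hypothetical; the point is only that event-level data gathered automatically can be aggregated into a per-pilot picture for leadership.

```python
from statistics import mean

# Hypothetical machine-to-machine feed: per-event records gathered
# automatically from the training station.
events = [
    {"pilot": "A", "score": 0.82, "flight_hours_30d": 14.5},
    {"pilot": "A", "score": 0.88, "flight_hours_30d": 14.5},
    {"pilot": "B", "score": 0.71, "flight_hours_30d": 6.0},
]

def proficiency_summary(records):
    """Roll event-level metrics up into a per-pilot summary."""
    by_pilot = {}
    for r in records:
        by_pilot.setdefault(r["pilot"], []).append(r)
    return {
        pilot: {
            "mean_score": mean(r["score"] for r in rs),
            "flight_hours_30d": rs[0]["flight_hours_30d"],
        }
        for pilot, rs in by_pilot.items()
    }

summary = proficiency_summary(events)
print(summary)
```

In practice the same pipeline would also fold in aircraft and systems history, as the quote describes.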

10. Promote experimentation to include ways to accomplish acquisition, logistic, and support tasks through technological innovations, outsourcing, and other techniques.

Top 10 Enterprise Strategy Process to Provide Weapons Systems Users with Tools to Improve Readiness

1. Available

Degree to which a system, subsystem, or equipment is in a specified operable and committable state at the start of a mission, when the mission is called for at an unknown (i.e., random) time.
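The classic way to estimate this is as uptime over total time, using mean time between failures (MTBF) and mean time to repair (MTTR). The figures below are invented for illustration.

```python
def operational_availability(mtbf_hours, mttr_hours):
    """Availability estimate: the fraction of time a system is in an
    operable, committable state, i.e. uptime / (uptime + downtime)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Hypothetical subsystem: fails on average every 400 operating hours
# and takes 25 hours to restore.
a_o = operational_availability(400, 25)
print(f"Availability: {a_o:.3f}")  # prints "Availability: 0.941"
```

The same formula makes the reliability and maintainability items below concrete: raising MTBF or cutting MTTR both push availability toward 1.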

2. Compatible

Capacity for systems to work together without having to be altered to do so. Products of the same or different types, or different versions of the same product, must work together; for example, a user must be able to open orders in either product.

3. Transportable

Quality of equipment, devices, and systems that permits them to be moved from one location to another and interconnected with locally available complementary equipment, devices, systems, or other complementary facilities.

4. Interoperable

Condition achieved among communications systems or items of communications-electronics equipment when information or services can be exchanged directly and satisfactorily between them and/or their users.

5. Reliable

Measure of quality, time, and speed performance: you want equipment to operate as long as possible without failures, and when failures occur, you want to fix them as quickly as possible.

Service of restoring failed equipment, machines, or systems to their normal operable state within a given inspection timeframe, using established practices and procedures.

8. Logistics Support

Integrated and iterative process for developing a materiel strategy that optimises functional support, leverages existing resources, guides the systems engineering process to quantify ownership cost over the service life, and decreases the logistics footprint.

Mechanical training devices enable trainees to rehearse actions, plans, measures, trials, movements, or decision processes. These devices must be designed to replicate, as closely as possible, the physical aspects of the equipment and operational surroundings trainees will find in the workplace.

Dispatch control centres plan, route, and schedule personnel, supply, and equipment movements from the point of origin through the port of debarkation to the final destination, as well as movements within the area of operations. In some cases, these agencies are permanent.

For example, every MAGTF should have a full-time distribution and transportation section. For smaller MAGTFs, this may be no more than one Marine at the combat service support operations centre. In other cases, movement control agencies are temporary.

Battalions, squadrons, regiments, and groups establish temporary movement control centres when their organisations are moving. Local standing operating procedures establish the composition and procedures for deployment control centres.

2. Materiel Transit Operation Centre

The Marine air-ground task force deployment and distribution operations centre is the MAGTF commander's agency responsible for the control and coordination of all deployment support activities. It is also the agency that coordinates with geographic combatant commander units and transportation component commands. When the MAGTF operates as part of a joint force, transit requirements are coordinated via the operations centres of all the geographic combatant commander's service components.

3. Mobile Capability Command

This operational capability is located within the MAGTF command element; it conducts integrated planning, provides guidance and direction, and coordinates and monitors transportation resources in its directorate role for the MAGTF's theater and tactical distribution processes.

4. Materiel Distribution Centre

The Materiel Distribution Centre is the MAGTF's distribution element, with responsibility to provide general dispatch and receipt services and consolidated distribution services, and to maintain asset visibility to enhance throughput velocity and sustain operational tempo.

While in garrison, the centre makes every effort to integrate or collocate with the base materiel transit operation in order to maintain distribution competence. For deployed operations, it functions together with the logistics combat element to establish and operate the distribution network under deployed conditions.

5. Distribution Liaison Cells

Distribution liaison cells are task-organised distribution elements structured to perform tasks aboard Marine expeditionary units or in forward operating areas, including but not limited to providing support for deploying MAGTFs.

6. Terminal Operations Organisations

Terminal operations organisations are integral to deployment and distribution systems, providing support at strategic, operational, and tactical nodes. They include the port operations group, beach operations group, railhead operations group, and the movement control agency of the landing force support party, each task-organised, manned, and augmented as required to perform these tasks.

7. MAGTF Movement Control Centre

A standing organisation and subordinate element that allocates, schedules, and coordinates internal transportation requirements based on the MAGTF commander's priorities. It supports the planning and execution of MAGTF ground movement scheduling, equipment augmentation, transportation requirements, materiel handling equipment, and other movement support on theater-controlled routes, and registers requirements with the joint movement centre for support. In addition, it coordinates activities with installation operations and supporting commands.

8. Major Subordinate Command Unit Movement Control Centre

Division and wing commanders deploy forces to support operational MAGTFs, directing the transportation and communications assets needed to execute deployments. Each command activates its movement control centre to support the marshaling and movement of assigned subordinate units; centres are established down to the battalion, squadron, or independent company level, as required, to serve as the unit transportation directorate.

9. Base Operations Support Group

Bases from which Marine Corps operating forces deploy establish base operations support groups to coordinate their efforts with those of the deploying units. Base operations support groups coordinate and manage transportation, communications, and other functional support requirements beyond the organic capabilities of supported units during deployment.

10. Station Operations Support Group

Air stations from which Marine Corps operating forces deploy establish station operations support groups to coordinate their efforts with those of the deploying units. Air stations have transportation, communications, and other assets useful to all commands during deployment.

Because MAGTFs are organised to conduct operations under austere conditions, Marine forces and MAGTF commanders provide the operational logistics capabilities necessary for conducting expeditionary operations, while tactical logistics are provided by MAGTF commanders and their subordinates. This expeditionary, temporary operations support is withdrawn after the mission is accomplished. Expeditionary operations involve action phases that have strategic, operational, and tactical considerations.

2. Deployment

Deployment is the movement of forces to the area of operations. Deployment is initially a function of strategic mobility. Operational-level movement in theater completes deployment as forces are concentrated for tactical employment.

Deployment support permits the MAGTF commanders to marshal, stage, embark, and deploy their commands. Although deployment is a strategic and operational-level concern, tactical-level units may be required to assist the deployment.

3. Entry.

Entry is the introduction of forces into theatre, accomplished by sea or air, although in some cases forces may be introduced by ground movement from an adjacent expeditionary base. Logistics capabilities are used in the entry phase to develop entry points, e.g., an airfield or port, an assailable coastline, a drop zone, or an accessible frontier.

4. Enabling Actions

Enabling actions are preparatory actions taken by the expeditionary force to facilitate the eventual accomplishment of the mission. Enabling actions may include seizing a port or airfield for the introduction of follow-on forces and the establishment of necessary logistics and support capabilities. In case of disruption, enabling actions may involve the initial restoration of order and stability. In open conflict, enabling actions may involve the use of force to stop a competitor's advance or capabilities, or capturing key terrain required for the conduct of decisive actions.

5. Departure or Transition.

Because expeditions are by definition temporary, all expeditionary operations involve a departure of the expeditionary force or a transition to some form of permanent presence. Departure is not as simple as a tactical withdrawal of the expeditionary force from the scene, because it requires withdrawing the force in a way that maintains the desired situation while preserving the combat capabilities of the force. For example, time must be taken to reload ships and restore sustainment capabilities, because the force may at any time be ordered to undertake another expeditionary operation.

6. Forward-Deployed Logistics Capabilities

The Marine Corps maintains a force reserve programme that allows MAGTFs to sustain themselves for a significant period of time during combat operations. Sustainment gives MAGTFs the required endurance until theater-level supply is established.

Sustainment resources forward deployed with MAGTFs are augmented and replenished through reserve materiel and land prepositioning programmes. The resulting logistics self-sufficiency is a fundamental, defining characteristic of expeditionary MAGTFs.

7. Reserve Materiel.

A combination of non-deployed force-held assets and programmed reserve-system purchases collectively ensures that MAGTFs can deploy with sufficient equipment and supplies to support a period of contingency operations, providing reasonable assurance that the force can be self-sustaining until resupply channels are established. Usually, the MAGTF deploys with sufficient aviation-specific equipment and supplies.

8. Maritime Prepositioning Force

The Maritime Prepositioning Force is a combination of prepositioned materiel and airlifted elements with limited sustainment capabilities. Smaller MAGTFs may be sustained ashore for more or less time depending on the size of the force, the number of prepositioning ships in support of that force, and other variables such as the inclusion of an aviation logistics support ship.

9. Prepositioned Programs.

Prepositioned vehicles, equipment, and supply stocks used for regional contingencies are configured to support a MAGTF. Stocking goals for the prepositioned programme are the same as for the prepositioned ships, and requirements can be filled with this equipment if directed.

10. Marine Expeditionary Planning Organisation

Preparation of plans for future operations is directed by administrative sections responsible for the execution of expeditionary plans. Subordinate elements and smaller MAGTFs conduct the same planning, with a greater focus on the current battle and with their smaller size dictating operational modifications.

Top 10 Implications of Emerging Marine Corps Logistics Concepts

1. Equipment

Technological developments require logistics teams to be more innovative and forward-thinking than their predecessors. Emerging concepts for the 21st century could yield significant savings in manpower, supply inventories, and maintenance costs, while at the same time increasing the responsiveness, efficiency, and effectiveness of support.

2. Advancing Technologies

Advancing technologies applicable to Marine Corps information and logistics systems and equipment are needed to further develop inherent operational capabilities and to reduce the logistics footprint and reliance on facilities ashore. Further, close liaison with industry will be essential to take advantage of technological breakthroughs.

3. Logistics Information Systems

The Marine Corps, in conjunction with the Navy, must develop and field logistics systems that will provide near real time, over-the-horizon logistics information. These systems also need to be able to determine future over-the-horizon, surface, and aviation assault support requirements.

4. Development and Fielding

Development and fielding of air and surface refueling capabilities, together with the over-the-horizon logistics information essential to success, will be needed to reduce the logistics footprint ashore, especially when a sea-based logistics method is required.

5. Sea-basing

Sea-based logistics is yet another emerging support concept that requires technology, coupled with innovative thinking, to become a viable reality. When providing a sea-based logistics capability, the Marine Corps needs to ensure that this capability is fully integrated with amphibious ships, aviation logistics support ships, hospital ships, combat logistics force ships, offshore petroleum discharge systems, and logistics over-the-shore systems.

6. Total Asset Visibility

Total asset visibility systems, combined with improved business practices, can make expeditionary logistics more anticipatory and more responsive in supporting the increased number and frequency of requirements from units dispersed at greater distances over a larger battlefield. Effective and accurate total asset visibility systems will be essential for rapid identification of logistics requirements, location of items in storage, immediate access, and tracking of transportation assets for delivery. Successful unit logistics support will depend heavily on total asset visibility systems to maintain responsiveness, especially in expeditionary operational scenarios.

7. Distribution Systems

Planners must develop future distribution systems that provide rapid and responsive means to receive, store, access, break down, repackage, transport inland, and distribute on demand smaller unit packages. Innovations will be necessary in the packaging of unit daily requirements to facilitate direct delivery from the container to the user. Improvements in shipboard selective warehousing, access, and offload technologies need careful examination to address the increased demand for deliveries and the increased frequency of smaller sustainment slices on limited transportation assets. Sea-basing will demand that distribution systems provide the means to accomplish at sea, or preclude having to do at all, the functions that currently necessitate general offload and buildup ashore.

8. Supply

Expeditionary logistics capabilities could decrease the need to stockpile or warehouse supplies. Emerging technologies in commercial enterprise, military warehouse modernisation and potential extension to shipboard or even container designs may potentially improve receipt, storage, accountability, and issue operations to the point where one supply warehouse person could do the work in a fraction of the time. Sizable cost savings could also result from increased use of commercial sources for commonly used items, tools, services, and repair parts. This could eliminate the current methods used to procure, store, and maintain large inventories of repair parts or backup subassemblies.

9. Maintenance

Shipboard maintenance requirements of on-board equipment need accurate identification as well as reduction, wherever possible. Technology can yield significant benefits in this area. The advances here can be realised through incorporation of built-in maintainability and reliability features in equipment and supplies. Longer shelf lives for various supplies can substantially reduce on-board equipment maintenance and the rotation of needed supplies.

Equipment reliability and availability technology reduces the number of maintenance actions required to ensure equipment readiness and simplifies repair. Significant savings become feasible in facilities, inventories, manpower, and the money required to maintain them.

Enhanced technological developments will also lead to growing procurements of commercial end items versus military-unique end items. Such efforts greatly reduce equipment cost, increase availability of and accessibility to commonly used parts, reduce mean time to repair, and increase overall equipment readiness.

10. Retention of Amphibious Capability

State-of-the-art technological logistics enhancements underscore the Marine Corps' naval character and why it must continually strive to improve its capability to conduct amphibious operations. The skills and knowledge built on amphibious capability are essential tools for influencing technological and tactical advances that produce time, manpower, cost, and other benefits.