Building an Industrial ARM: Managing Delay

Hello. Thank you for choosing to take a look at this video entitled Building an Industrial ARM-- AM65xx Architecture Differentiations for Industrial Applications. This is Part 2 of a multipart series, and this part is called Managing Delay. In the first part of this series, we took a look at a modern automation system called the baggage warehouse.
This is a baggage handling system that's in use in the Dubai International Airport, and it's a modern automation system built together with lots of different components to autonomously handle baggage handling. Anybody that's traveled knows when you check your bag in and it gets to the plane-- and hopefully it gets back to you at the airport of your destination-- that that bag must go through a lot of fun stuff, and this is a new way to do that. And it's a great example of what we're trying to do with modern automation systems.
We talked about the fact that system is composed of 234 PLCs, or Programmable Logic Controllers, and lots and lots of inputs and outputs and motor drives and lots of other fun stuff. And we dove into one of those PLCs to look at how a processor that was used in it might be architected, and the type of processing that it was going to be doing. So if you didn't get a chance to check that out, I hope you will.
There's a video of the baggage handling system that you can go check out as well. It's very interesting, and it sets a good context for this discussion, where we're going to talk about managing delay with a modern-day processor. So let's dive into that a little bit further.
So here's that processor that we built up from that last section, and you can see that this processor is connected to some inputs and outputs, probably through an industrial ethernet type of application. And if you build this together and put a lot of other fun stuff around it, that's how a PLC could be built up. And remember, the type of processing that we're doing is control processing.
So here's an example of that showing a physical process at the bottom, where inputs and outputs are being handled, and this is the actual reading of the sensors and moving of the motor, if you will. And then that input is transferred over to the processor for processing. A lot of times, in PLCs, this is ladder logic. And then, of course, an output is calculated and put back out for the physical process to use to complete the overall process. And the interesting thing with control loop processing is a lot of times, this has to be done within a very specific time. We call that a deadline.
Now, how severe that deadline is will vary by system, but sometimes it can be quite severe, causing the entire system to break down or even causing injury or loss of life. So these can be very serious systems, and of course, they need to be architected in such a way as to not miss those deadlines.
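To make that read-process-write cycle and its deadline concrete, here's a minimal C sketch. All of the type and function names and the microsecond numbers are illustrative, not from any real PLC runtime.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* One control-loop cycle: read input, compute, write output.
 * The cycle meets its deadline only if the total time stays
 * under the budget. Times are in microseconds; illustrative only. */
typedef struct {
    uint32_t read_us;     /* time spent reading the sensor input    */
    uint32_t compute_us;  /* time spent in the ladder-logic pass    */
    uint32_t write_us;    /* time spent writing the actuator output */
} cycle_cost_t;

bool cycle_meets_deadline(const cycle_cost_t *c, uint32_t deadline_us)
{
    uint32_t total = c->read_us + c->compute_us + c->write_us;
    return total <= deadline_us;
}
```

Any delay added to one of the three legs eats directly into the margin before the deadline, which is the failure mode the rest of this discussion is about.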
So let's take a look at the components of this processor. So we're diving deeper into what this processor needs to have in order to fit well into this overall architecture. And of course, the first component would be a way to actually talk to the inputs and outputs. We'll call that an industrial ethernet interface. And a lot of times, this has queues for moving data as well.
The next part is just going to be some way to connect this data to the rest of the system, and we'll call that, generically, buses and interconnect. And this is used primarily for data movement. And the last part is the piece that we all think about, and this is the actual processing core. And a lot of times, they come along with some caches for a local storage of data while it's doing the processing operation.
So let's zone in on this input part of the process-- the physical process that's going to happen. And we're going to need to transfer that input over to the processor so that it can be put into the control algorithm that's being used. So here we'll show that data coming from the input, transferring through the industrial ethernet. And then the question becomes, well, where do we put it once it's inside the processor? How do we store it?
Obviously, we need some kind of memory. It would be nice, probably, from a latency perspective to go ahead and put it directly in the caches of the core, but you can't write directly to cache, so that's not going to be a good solution for us. So modern-day processors, of course, use DDR quite a bit, so we'll use that as our example. So we'll have some DDR, and we'll store our input data in the DDR.
All right, so the next part of the process is the actual processing part, and that's going to start off by interrupting the core to let it know that it's got an input to go process. Then we can process that, of course, by reading it into the processor and then writing it back out. So that's the processing part of our control loop. And we'll show that data actually being read over to the core, processing happens, and of course, we'll write that back out as a local output. And to complete the control loop and get it back down to the physical process, we need to write that output out to the output device. And here's us doing that.
So that's really the full process. We'll bring that all up so you can see it all. So really, input, interrupt the core, process, output, with a lot of communication and synchronization going on there as well. We're looking at this pretty conceptually at this point, but you probably get the overall feel of what's going on. And of course, we want to do that with plenty of time for the physical process to complete in order to meet our deadline, and for the system that we're controlling to be controlled properly so that it does what it's designed to do.
All right, so that's all well and good when everything works, but sometimes things go wrong. So what happens if, for example, our input takes longer? What would cause that to happen? So here we're going to show that your input is stretching out, going beyond the time that it did in the first example we just looked at. So now there's some delay that's added from the time that the first input was done.
So now this input is taking longer than that one, and of course, if our processing takes the same amount of time and our output takes the same amount of time, it's probably not going to leave enough time for the physical process to complete before the deadline, and this could, of course, cause problems with our control system. So this is not good. Delays are bad in a control loop-designed system.
So we want to learn to avoid those, and we want to see how we can architect this system-- particularly the processor-- for being able to avoid these delays, or manage them. So how could we do this differently so that that delay is at least managed? So let's take a look at what we're trying to do.
We're bringing data into DDR. We're interrupting the core. But in this example, what if the core can't come service that interrupt immediately? Maybe it's busy doing something else, and it just can't come do that, and that's, of course, going to delay the processing.
We could try to handle this with scheduling. So you could use a priority-based scheduling algorithm to say, well, this is your most important thing, so when this happens, stop what you're doing and go do this. And you could use nested interrupts in a more practical way to do that. Those are useful techniques, but sometimes they only take you so far, and it's hard to architect that and validate it for every circumstance in a complex system.
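The priority-based scheduling idea described above can be sketched as a dispatcher that always picks the highest-priority ready task. This is a toy illustration, not any particular RTOS's scheduler; names and the "larger number means more urgent" convention are assumptions.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Toy priority-based dispatcher: scan the task table and pick the
 * ready task with the highest priority. Illustrative only. */
typedef struct {
    int  priority;  /* larger number = more urgent */
    bool ready;
} task_t;

/* Returns the index of the highest-priority ready task, or -1 if
 * nothing is ready to run. */
int pick_next_task(const task_t *tasks, size_t n)
{
    int best = -1;
    for (size_t i = 0; i < n; i++) {
        if (tasks[i].ready &&
            (best < 0 || tasks[i].priority > tasks[best].priority))
            best = (int)i;
    }
    return best;
}
```

The catch, as noted above, is that this only helps once the core actually gets around to running the dispatcher; it doesn't bound how long the core stays busy in whatever it was already doing.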
So that can be useful, but especially as we tap out processors at 1 to 2 gigahertz, for example, or we get limited by that, it can only get you so much. So you might want something that allows you to go further, so we'll look at a different solution for the case where the processor is busy and that causes a delay before the input sitting in DDR gets read. And we'll do that by adding more cores.
So that's going to be a common theme throughout this presentation-- is in order to enable you to manage delay, and of course, get your deadlines taken care of properly, consistently, and systematically, a lot of times you need specific resourcing in order to handle real-time data. And this is going to be the first example of that. We're going to add a core-- so here's that new core.
And what that's going to allow us to do is manage interrupts with one core and have another core for your ladder logic. And then your ladder logic can constantly poll, or do whatever it needs to do, in order to go get a new input. And as long as your cycle time in your ladder logic is appropriate, you should be just fine. So this is an example of adding another core in order to allow you to manage that delay and get rid of it, which, for control processing, is what we want to do.
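The two-core split above can be sketched as a mailbox between an interrupt-handling side and a polling ladder-logic side. This is a single-threaded illustration of the handshake only; on real silicon the flag would be a volatile or atomic variable shared across cores, and all names here are made up.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Mailbox between the interrupt core and the ladder-logic core.
 * The interrupt core publishes each fresh input; the ladder-logic
 * core polls at the top of every scan cycle. Illustrative sketch. */
typedef struct {
    uint32_t value;
    bool     fresh;  /* set by the interrupt core, cleared by the poller */
} mailbox_t;

/* Runs on the interrupt-handling core. */
void isr_core_publish(mailbox_t *m, uint32_t input)
{
    m->value = input;
    m->fresh = true;
}

/* Runs on the ladder-logic core: non-blocking poll. Returns true if
 * a new input was consumed into *out. */
bool logic_core_poll(mailbox_t *m, uint32_t *out)
{
    if (!m->fresh)
        return false;
    *out = m->value;
    m->fresh = false;
    return true;
}
```

Because the poll never blocks, the ladder-logic core's scan time stays bounded regardless of how the interrupt core is loaded.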
All right, so let's look at another example where delays can come in. So we've got our two cores. We've got that part of the problem solved. But where can another delay come from? And that would be the type of memory that we're using.
So we come in here. We write our input to the DDR. We interrupt the processor. We've got a dedicated core, so you can come get the data. But what if, when it comes to get the data, that DDR is busy?
So before, we saw that the core was busy. Now the core's available, but the DDR has to, maybe, refresh. And that's going to delay the data some, bring our delay back into our control loop, and cause havoc in our system. So we don't want that to happen, so how do we solve that problem?
So think about it for a second. And one way you could do it is getting some more memory. So we'll add some on-chip memory to the device. There's our data coming in delayed. We don't want that, so we'll add on-chip memory where we can store our time critical data. And unlike the caches that we talked about earlier, on-chip memory can be written to directly.
So we'll move our input data over from DDR to our internal SRAM, and that allows us to write it directly there and use that to process it up to the core, and then internal SRAM won't have the refresh problem or the other delays potentially associated with DDR, and that allows us to get rid of that delay as well. So we added cores for the core processing. Now we've duplicated memory so that we have a particular path for our real-time data through internal memory so that we can avoid DDR delays.
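Staging the time-critical input in on-chip SRAM might look like the sketch below. On a real device the buffer would be placed in the on-chip RAM region through a linker section and probably filled by DMA; here it's just a static array, and everything is named for illustration.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Stand-in for a buffer placed in on-chip SRAM. Keeping the
 * time-critical frame here means the core never waits out a DDR
 * refresh while processing it. Illustrative sketch only. */
#define RT_BUF_WORDS 64

static uint32_t sram_rt_buf[RT_BUF_WORDS];

/* Copy one input frame from the DDR staging area into SRAM,
 * clamping to the buffer size. */
void stage_rt_input(const uint32_t *ddr_frame, size_t words)
{
    if (words > RT_BUF_WORDS)
        words = RT_BUF_WORDS;
    memcpy(sram_rt_buf, ddr_frame, words * sizeof(uint32_t));
}

/* Read one word of the staged frame, as the processing core would. */
uint32_t sram_rt_read(size_t i)
{
    return sram_rt_buf[i];
}
```

The point of the duplication is determinism: SRAM access time is fixed, so the read leg of the control loop no longer inherits DDR's variable latency.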
All right, so let's continue with this theme. And now we've got a good path for our real-time data. So we're showing that here-- inputs and outputs for your real-time data going through industrial ethernet, into your internal memory, and being used by one of your cores to do the processing.
Got another core for interrupts and some other housekeeping stuff if we need to. And that's all good for the real-time path, but what about other stuff? And as you can anticipate, modern-day systems need more than just to process real-time data. They often have to process other data as well.
That's the non-real-time data. This could be management data, metadata. It's important to the system for the overall operation, but it's not part of a closed loop, or it's a very relaxed closed loop. So it doesn't have these real-time deadlines that we've been looking at that can impact the system in such a catastrophic way as causing it to truly fail.
So we have other non-real-time data that needs to flow through the system as well, and it needs to not interfere with our real-time data. And that's the data that we're showing there in the dark gray and the purple. So our non-real-time data-- as we add it to the system, can it cause problems with your real-time data?
And of course it can if you don't build things right. It can certainly come in and start causing trouble, because if it's going over the same resources, those resources can get consumed by a lot of non-real-time data, because there's often more of that, and that causes the real-time data to miss its timing.
And one place that you might see that going on here is inside the queues of the industrial ethernet. If they're not designed correctly, if you have one queue for all of this data, of course some non-real-time data could come in there and clog these queues up, and cause your real-time data to get stuck behind non-real-time data and not meet its timing. So we need to architect this a bit differently.
And of course, the way that we're going to do that is much like we did with cores and memory. We're going to add some redundant resources and dedicate those just for real-time data. So we'll add some more queues. We'll pass our real-time data through these new real-time queues, if you will. And we'll pass our non-real-time data through the non-real-time queues. And we can store those, of course, in DDR.
So now we've provided a separate path for our real-time data and our non-real-time data. And that's cost us more resources, but if we need to solve this problem and solve it reliably, then this is a good design. All right, so let's look at another place where we might be seeing some problems.
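The separate-queue idea can be sketched as strict-priority arbitration: as long as anything is waiting in the real-time queue, it is served first, so a real-time frame can never be stuck behind bulk traffic. Pending counts stand in for the actual frame queues, and all the names are illustrative.

```c
#include <assert.h>

/* Strict-priority arbitration between the real-time queue and the
 * non-real-time queue of an industrial ethernet port. Sketch only. */
typedef struct {
    int rt_pending;   /* frames waiting in the real-time queue     */
    int nrt_pending;  /* frames waiting in the non-real-time queue */
} eth_queues_t;

typedef enum { SERVE_NONE, SERVE_RT, SERVE_NRT } serve_t;

serve_t next_to_serve(const eth_queues_t *q)
{
    if (q->rt_pending > 0)
        return SERVE_RT;   /* RT traffic always wins the arbitration */
    if (q->nrt_pending > 0)
        return SERVE_NRT;
    return SERVE_NONE;
}
```

With a single shared queue, the decision above is impossible; the extra queue is exactly the "redundant resource" being paid for.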
What about this bus structure, where all the data's still flowing through? If that bus is not wide enough or fast enough, that could cause problems. And again, it's a single resource, so what we probably want to do is go ahead and add more buses so that we have a dedicated bus for our real-time data, and then we can use the non-real-time bus for our non-real-time data.
So we'll add another bus, pass our real-time data through that, and then take the non-real-time data through the other bus. So now we've taken quite a bit of the system, and we've added specific pieces to handle this real-time data, and we've allowed other parts for the non-real-time data so that you can design a system that can manage both of these and be successful.
One big block in the middle here that we've talked about is the interconnect, and this is obviously very important. It's doing all the data movement. We've talked about adding buses. We added more memory. We've added more cores. We've got lots of different things that want to communicate and send data across this interconnect, and this interconnect needs to be designed in a way to be able to manage that.
Mainly, it needs to know what real-time data is and what non-real-time data is, and make sure that it can keep the timing constraints of the real-time data. One of the biggest issues with an interconnect is you have a whole bunch of non-real-time data and you start a big transfer-- let's say through ethernet to DDR-- and the interconnect dedicates a bunch of resources to get that done as fast as possible, but then a real-time transfer that's small comes in and has to wait for that big transfer to happen. Obviously that's not going to be a good design, so we need an interconnect that can make sure that your real-time data still gets transferred, even when there are big transfers going on for non-real-time data.
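One common way an interconnect bounds that wait, which may be what's implied above, is to move bulk transfers in fixed-size chunks and re-arbitrate between chunks, so a small real-time request waits for at most one chunk rather than the whole burst. The arithmetic sketch below assumes one word per bus cycle; the numbers and names are purely illustrative.

```c
#include <assert.h>
#include <stdint.h>

/* If bulk non-RT data moves in bounded chunks with re-arbitration
 * between them, the worst case for a small RT request is arriving
 * just after a chunk starts, so it waits one full chunk. */
uint32_t worst_case_rt_wait_cycles(uint32_t chunk_words)
{
    return chunk_words;
}

/* Number of chunks needed to move a bulk transfer (ceiling divide). */
uint32_t chunks_needed(uint32_t total_words, uint32_t chunk_words)
{
    return (total_words + chunk_words - 1) / chunk_words;
}
```

Smaller chunks mean a tighter real-time bound but more arbitration overhead for the bulk traffic, which is the usual tuning trade-off.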
All right, so we built this up pretty well. I'm going to change the view just a little bit and look at a different aspect of this system. So we've looked at the innards of the chip pretty well, but let's think about the actual physical medium down here of our ethernet and see if that could be a potential cause for delay.
As we start to put our traffic across that ethernet transmission medium, can that medium actually become a cause for delay? And of course it can, depending on the bandwidth that's available and how it's architected. It certainly could become an issue where a bunch of non-real-time data's taken over that bus and using it, and your real-time data can't get transferred, and things like that.
So the actual ethernet itself could be a cause for delay in systems. And you can see that as that wire's being used by everything, and there's our delay picture showing that we're getting delay in the system. And whether it happens in the processor or not, if it's a delay in the system, that's going to cause problems.
So ethernet itself can become a problem, so we need a solution for that. We need to look at some of the new modern industrial ethernet adaptations that are happening. And of course, one to consider would be TSN.
And what that's going to primarily do is, like we've done in the rest of our discussion today, provide a separate path for RT data to non-RT data so that the RT data can always get on the wire when needed, and the non-RT data can be caused to wait or use a different timing so that the RT data can take the priority that it needs to take.
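One TSN mechanism that does this, in the spirit of IEEE 802.1Qbv's time-aware shaper, is a repeating schedule where a window of each cycle is protected for RT frames and non-RT frames may only start outside it. The sketch below is a much-simplified gate check; the window layout and names are assumptions, not any standard profile.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Simplified time-aware gate: within each repeating cycle, the span
 * [rt_start_us, rt_end_us) is reserved for real-time frames. Times in
 * microseconds; illustrative only. */
typedef struct {
    uint32_t cycle_us;     /* length of the repeating schedule        */
    uint32_t rt_start_us;  /* protected RT window start within cycle  */
    uint32_t rt_end_us;    /* protected RT window end within cycle    */
} gate_sched_t;

/* May a non-real-time frame start transmitting at time now_us? */
bool nrt_may_transmit(const gate_sched_t *g, uint64_t now_us)
{
    uint32_t phase = (uint32_t)(now_us % g->cycle_us);
    return phase < g->rt_start_us || phase >= g->rt_end_us;
}
```

Because the schedule repeats against a shared time base, every node agrees on when the protected window is, which is why this mechanism depends on the time synchronization discussed next.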
All right, so one last view of this, as we've kind of gone through the entire system and systematically removed some delays. Let's think about the system as a whole and how it's going to operate. And one of the big questions would become, well, what if we want to do things at the same time?
So everybody agrees to do something at the same time. How do we make sure the system can do that? And remember, this can be a pretty big system. We talked about the baggage warehouse at 234 PLCs, and that whole system trying to work together. But this can be as simple as you and your colleagues trying to organize a lunch.
You all agree, OK, at noon, we're going to get together and go. And you want to go at the right time so you can all be efficient and organized and all those good things. So this is really just any system where you're trying to be time-synchronized. And of course, the first thing you need to do that is you need a time base.
So everybody's going to need a clock or a watch, depending on how you want to look at it. So we'll add those into our diagram, where everybody's going to try to do something at the same time, all right? So if you did this and the input had a slow watch, what would happen?
So everybody agrees we're going to do something at 1:30, but the input's watch is slow, so when its watch reads 1:30, what's really happening? Well, it's really five minutes late, right? And the input being five minutes late is going to end up looking like the delay that we talked about earlier.
So there's our delay. So if the input reads late because it's got a slow watch, it's going to look like a delay to the rest of the system. The input becomes that guy that's always five minutes late to your lunch meetings and causes you to be last in line in the cafeteria.
So what we want to do to solve this, of course, is have a robust synchronization system where all of the watches across the system are well-organized, and the system actually obeys those times so that your colleague can't always be five minutes late to lunch. So time synchronization is going to be important for removing delays in systems as well, and it'll be one of the strategies that we'll be looking at for how to handle this type of activity in a system.
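The usual way industrial systems keep those watches aligned is a two-way time transfer in the spirit of IEEE 1588 (PTP): the master timestamps a message at t1, the slave receives it at t2 on its own clock, replies at t3, and the master receives that at t4. Assuming a symmetric path delay, the slave's offset from the master is ((t2 - t1) - (t4 - t3)) / 2. The sketch below just computes that estimate; the nanosecond values in the usage are made up.

```c
#include <assert.h>
#include <stdint.h>

/* Offset of the slave clock relative to the master, from one two-way
 * exchange, assuming the network delay is the same in both directions.
 * All timestamps in nanoseconds. */
int64_t clock_offset_ns(int64_t t1, int64_t t2, int64_t t3, int64_t t4)
{
    /* (t2 - t1) = delay + offset, (t4 - t3) = delay - offset */
    return ((t2 - t1) - (t4 - t3)) / 2;
}
```

The slave then subtracts the estimated offset (or slews its clock toward it), so the "slow watch" in the lunch analogy gets corrected instead of showing up as a delay downstream.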
And that brings us to an end of this section, where we've gone through a processor architecture quite thoroughly and looked at several places where delay can be introduced into the system, and how that delay can be bad for the system. Here's a recap of all of those: we went through cores and memory, the packet management of the queues, the buses, and the interconnect. And the last couple of things we talked about were the actual ethernet interface itself, and time-synchronizing this whole system.
And these were all concepts that, if we apply those to a chip architecture, hopefully we can architect a chip that will be very effective for closed-loop processing, and would be a good choice for PLCs. So I hope you've enjoyed this talk. I certainly thank you for your time.
Here are some more resources for you if you'd like to go get more information, particularly for the AM654x. We have more training available that you can go look at as well.
Of course, we've got lots of information on the web as far as the data sheet, technical reference manual-- all the different things that you would need to do a design. We also have software available for the AM65x family. We have boards, the evaluation module, and the industrial development kit.
And of course, we are happy to help support you via our support forum, and you can search that for questions that have been asked, and hopefully that will help you. If not, you can ask a new question, and we will try to do our best to help you out there. Again, thank you so much for your time, and I hope you have a fabulous rest of your day. Thank you.