MPU vs. MCU
There was a time when microprocessors and microcontrollers were distinct devices. There was never a question as to which one you were dealing with. But changes in memory architecture have muddied the distinction in modern devices.
There are a number of ways in which microprocessors and microcontrollers could possibly be differentiated. But there is no universal agreement as to how that should happen, and some folks — although definitely not all — have come to the conclusion that any distinction might not even matter all that much anymore.
"The departure betwixt an MCU and an MPU has become much fuzzier in contempo years," said Colin Walls, embedded software technologist at Mentor, a Siemens Business concern. "Originally, an MCU integrated CPU, retentivity and peripherals on ane bit. Nowadays, although this is still the instance, information technology'southward very mutual to attach boosted external memory, as the MCUs are powerful enough to back up more sophisticated applications."
A tale of two markets
There was a time when computing chips targeted two very different markets. On the more visible front, devices were targeted at mainstream computing, where performance was the primary consideration. Referred to as "microprocessors," these single-chip computers powered personal computers and larger systems.
Today we see them in laptops, desktops, and servers of all types. What's key is that they're general-purpose engines, intended to run any number of programs that aren't known a priori. Primary memory is DRAM, and non-volatile storage is the hard drive (or SSD).
On the less visible side was the world of embedded computing. Here there was a need for modest computing power with a dedicated purpose. The intended program likely would be implemented in firmware so that the entire system — program and all — could be verified prior to shipping. Memory requirements were much more limited, and SRAM and non-volatile memory for code storage could be integrated onto the same chip as the CPU. Critically, real-time response was often important.
Devices in this market also tended to be used in environments with very specific I/O needs. Some might be driving motors. Others might be processing audio or reading sensors. It became useful to integrate the specialized peripheral interface hardware onto the same chip as the CPU and memory. This resulted in a wide range of chips with differing characteristics. But overall, CPUs integrated with SRAM, non-volatile memory, and specialized peripherals were known as "microcontrollers."
Microprocessors have rocketed up to 64-bit monsters, while there are still plenty of 8-bit microcontrollers. But in the middle, some changes have occurred that make the distinction far less clear.
While not the sole determining factor, the integration of flash memory was an important feature of the microcontroller. But flash memory has not been available at the most advanced process nodes, so many devices marketed as microcontrollers use external flash memory instead of embedded flash. They also may use external DRAM.
In fact, a process called "shadowing" takes code from external flash memory and copies it into DRAM, from which the code is then executed. And in order to improve performance, caching may be included. That makes the CPU/memory subsystem pretty much indistinguishable from that of a microprocessor. So is it now a microprocessor? Is there no longer a meaningful difference?
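To make the shadowing idea concrete, here is a minimal C sketch of a boot routine that copies a code image from memory-mapped external flash into DRAM and then jumps to it. The addresses, sizes, and names (FLASH_CODE_BASE, DRAM_CODE_BASE, CODE_SIZE, app_entry) are hypothetical placeholders; on a real part they would come from the vendor's memory map and linker script, and cache maintenance would be vendor-specific.

```c
/*
 * Minimal sketch of boot-time "shadowing," assuming a GCC-style
 * bare-metal toolchain. All addresses, sizes, and names below are
 * hypothetical, not any specific vendor's memory map.
 */
#include <stdint.h>
#include <string.h>

#define FLASH_CODE_BASE ((const uint8_t *)0x90000000u) /* assumed memory-mapped flash */
#define DRAM_CODE_BASE  ((uint8_t *)0x80000000u)       /* assumed external DRAM       */
#define CODE_SIZE       (256u * 1024u)                 /* assumed size of code image  */

typedef void (*app_entry)(void);

void shadow_and_run(void)
{
    /* Copy the application image out of slow flash into faster DRAM. */
    memcpy(DRAM_CODE_BASE, FLASH_CODE_BASE, CODE_SIZE);

    /* On cached parts the instruction cache would be invalidated here;
     * that step is omitted because it is vendor-specific. */

    /* From this point on, code executes from DRAM, not flash. */
    ((app_entry)(uintptr_t)DRAM_CODE_BASE)();
}
```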
Fig. 1: The top is a typical simplified image of a microprocessor system. The DRAM and hard drive are external to the chip. The bottom shows an older microcontroller on the left and a newer one on the right that no longer looks so different from a microprocessor. Source: Bryon Moyer/Semiconductor Engineering
Possible differentiators could include the following:
- CPU capabilities: If the CPU has a sophisticated pipeline, with speculative execution and other superscalar capabilities, that could qualify it as a microprocessor. Exactly where the transition would occur, however, is not well defined.
- More bits: An 8-bit device is more likely to be considered a microcontroller, while a 64-bit device is most likely a microprocessor. But then again, the very first microprocessor was 4 bits, so this is more a matter of history than a defining characteristic.
- Operating system: One might classify according to the type of operating system the machine can run. If it runs Linux, then you might call it a microprocessor. If it runs only smaller real-time operating systems or even bare metal, then you could call it a microcontroller. This leaves a lot of middle ground for devices that perhaps could run Linux.
- Timing requirements: Microcontrollers are often, although not exclusively, used for applications that require hard or soft real-time response. Microprocessors generally can't be used for that purpose.
- Multicore: It's much more likely that a multicore processor would be considered a microprocessor, especially if the cores are identical and managed symmetrically. But specialized devices may have more than one processor, with some dedicated to a specific task like digital signal processing. They're likely to be considered microcontrollers, but are they? Likewise, a device doesn't have to be multicore to be a microprocessor, so this really isn't a good determiner.
- Purpose: You could say that a general-purpose device is a microprocessor, while a single-purpose device is a microcontroller. But that's really all about how the device is used. There are devices you could use either way. What would you then call that device in the absence of knowing how it's used?
- Peripherals: This leaves the specialized peripherals as a possible differentiator. It's probably true that full-on microprocessors won't have those peripheral circuits, largely because they're intended for general-purpose use rather than being tied to a specific application. So you could probably say that, if it has such peripherals, it's a microcontroller. But the converse isn't true: the lack of peripherals doesn't mean that it's a microprocessor.
Each of the obvious characteristics fails or is, at best, unsatisfactory. So where does that leave us? We asked a number of folks their opinions, and there was no consensus whatsoever. Here are some of their thoughts.
Marc Greenberg, group director of product marketing, IP group at Cadence: "I don't know if there's some 'official' engineering definition of the difference between microcontroller and microprocessor. A quick search seems to reveal that the presence of NVM on the die makes it an MCU, but there are bits of NVM on all kinds of microprocessors. And microprocessors may have MCUs on the same die too, so what is that? The tiniest cache-less processors may still have some registers and SRAM. Is a sequencer coded in RTL really any different from a general-purpose processor executing from a ROM? So the distinction between a microcontroller and a microprocessor is somewhat arbitrary, and that means that it can be whatever you want it to be. When I think of microprocessors, I think of larger processors that are controlling general-purpose machines (like desktops, servers, tablets, etc.) and microcontrollers as the heart of embedded devices that are headless or have smaller specific-purpose UIs."
Grant Martin, distinguished engineer at Cadence: "From Wikipedia, a one-liner for each. 'A microcontroller is a small computer on a single metal-oxide-semiconductor integrated circuit chip. A microprocessor is a computer processor that incorporates the functions of a central processing unit on a single (or more) integrated circuit (IC) of MOSFET construction.' Both of those are pretty useless, but they point to the arbitrariness of trying to distinguish them. If you drill into this a bit, a microprocessor has the functions of a CPU, so it's the 'computer processor,' whereas the microcontroller is a more complete 'computer,' so that means microcontrollers include microprocessors, which is contrary to the convention. But is a 16-way server processor with multiple processor 'cores' a microprocessor anymore? And is a multi-way heterogeneous SoC in, for example, a cell phone — which might include multiple application processing cores, multiple DSPs for audio, video, and image processing, a GPU or two for rendering images on the screen, and a neural-net processing unit, just for fun — a 'microcontroller'? From my point of view, it is time for the industry to retire these somewhat archaic terms and instead use more precise, albeit longer and more descriptive (what I would call 'boringly precise') terms."
Jeff Hancock, senior product director at Mentor, a Siemens Business: "From a system software perspective, a microcontroller is expected to be amenable to applications that directly interpret and control hardware sensors and actuators. Such access often involves consistent and reliable instruction timing, which is at odds with the needs of a general-purpose microprocessor. The general-purpose microprocessor aims to optimize throughput, whereas the microcontroller often optimizes latency. So if you want a big database, a microprocessor is likely appropriate. If you want fine motor control, a microcontroller is for you. The external memory and caches certainly can disrupt the determinism of a microcontroller, but this is a long way from declaring it equivalent to a microprocessor. In particular, the existence of external memory does not require all processing units in the MCU to use external memory exclusively, or even at all. Systems can be constructed with isolated subsystems that allow critical workloads to proceed in parallel with less critical application-level systems that make use of larger external memories and caches."
Mentor's Walls: "From the software engineer's point of view, this is an interesting challenge. There are likely to be two memory regions at non-contiguous addresses. The on-chip memory is small but faster, so it is best reserved for code that benefits from the optimal speed, like the real-time operating system. This has two implications: the development tools must be flexible enough to map the code correctly onto the memory, and the RTOS must be small enough [generally very scalable] to fit into the on-chip memory."
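As a rough illustration of Walls' point about mapping code onto two regions, a GCC-style toolchain lets the developer tag latency-critical functions for a fast on-chip section while everything else defaults to external memory. This is a sketch under assumptions: the section name ".fast_text" is an arbitrary choice, and the actual region placement would happen in the project's linker script, which is not shown.

```c
/*
 * Sketch of splitting code across two non-contiguous memory regions,
 * assuming a GCC-style toolchain. The section name ".fast_text" is an
 * arbitrary illustrative choice; a linker script (not shown) would
 * place it in the small on-chip SRAM and leave the default .text
 * section in the larger, slower external memory.
 */

/* Latency-critical code, e.g. RTOS internals: request on-chip placement. */
__attribute__((section(".fast_text")))
void rtos_tick(void)
{
    /* Deterministic, fast-fetch work goes here. */
}

/* Bulk application code: stays in the default .text section, which the
 * linker script can map to external flash or DRAM. */
void application_task(void)
{
    /* Less timing-critical logic lives here. */
}
```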
Nicole Fern, senior hardware security engineer at Tortuga Logic: "Microcontrollers historically have been associated with embedded systems, where the requirements of low cost and low power are more important than performance. But with the advent of mobile computing and IoT edge computing, complex processing is now required for many embedded systems. This results in MCU offerings that look more like MPUs, with options for external memory and caches offering increased performance and configurability, but marketed for the embedded space. The difference between the terms MPU and MCU for these situations may simply depend on the lineage of the system the CPU is being integrated into."
Thomas Ensergueix, senior director for low-power IoT business at Arm: "Over recent years the lines have blurred between microcontrollers and microprocessors. One key difference between MCUs and MPUs is software and development. An MPU will support rich OSes like Linux and the related software stack, while an MCU traditionally will focus on bare metal and RTOSes. It is up to the software developer to determine which software environment and ecosystem fits best for their application before deciding which hardware platform, MCU or MPU, works best. As modern MCUs have transitioned to 32-bit, we have also seen a steep increase in performance, which has helped to close the gap between MCUs and MPUs. For example, many Arm Cortex-M7 based MCUs deliver over 100 Dhrystone MIPS, or over 2,000 points in CoreMark. Many of these devices also have a very large built-in memory or offer a fast interface to connect external memories. This has ensured that performance and memory are no longer bottlenecks for MCUs and has brought them closer to low-end MPUs."
Conclusion
So in the end, does it really matter if we nail down the distinction? Probably not. Applications come with requirements, and it's the requirements that will decide which device is used – regardless of what we call it.
Related
The MCU Dilemma
Microcontroller vendors are breaking out of the box that has constrained them for years. Will new memory types and RISC-V enable the next round of changes?
An Increasingly Complicated Relationship With Memory
Pressure is building to change the programming paradigm associated with memory, but so far economic justifications have held back progress.
MCU Knowledge Center
MPU Knowledge Center
Source: https://semiengineering.com/mpu-vs-mcu/