This Cortex-A8 processor achieves a further 4x performance improvement over the 300 MHz ARM9 through its superscalar architecture, which exploits instruction-level parallelism within a single processor.
What does ILP stand for?
ILP stands for Instruction-Level Parallelism.
This definition appears very frequently.
See other definitions of ILP:
- Information and Language Processing (UK)
- Information Leakage Prevention (IT security)
- Information Led Policing
- Information Lifecycle Protection (data management)
- Information Literacy Program (various locations)
- Initial Learning Program
- Initial List Price
- Injection-Locking Process
- Institut Latihan Perindustrian (Malaysia)
- Instituto Libertad y Progreso (Spanish: Institute for Liberty and Progress; Bogotá, Colombia)
Samples in periodicals archive:
Chapter 2: Instruction-Level Parallelism and Its Exploitation.
After the processor core has been created using these new VLIW instructions, software developers programming the Xtensa core need only use the standard Xtensa C/C++ Compiler (XCC), which automatically extracts the instruction-level parallelism from C/C++ code and bundles operations into VLIW instructions whenever possible.
The architecture exploits multiple levels of parallelism:
- task-level parallelism between the system processor, DSP processor and DPU
- data-level parallelism (DLP), with multiple lanes executing the same instructions on different data in parallel
- instruction-level parallelism (ILP), via very long instruction word (VLIW) driving multiple arithmetic logic units (ALUs) per lane
- sub-word single instruction multiple data (SIMD), in which each ALU can operate on multiple operands
On the DPU, a kernel function runs identically on every lane, processing different data.
35mW/MHz by efficiently supporting instruction-level parallelism and data-level parallelism.
This enables Secure64 to capitalize on the chip's large register sets and high instruction-level parallelism to significantly boost network performance.
OpenIMPACT produces highly optimized binaries (on par with Intel's icc compiler) through aggressive use of predication, speculation, instruction-level parallelism, and profile-based optimizations.
The result is that the Jazz 2 architecture can provide code densities that are 50% more efficient than Jazz and can rival single-issue DSP architectures that do not provide instruction-level parallelism.