Experimenting with Arm technologies before committing to the manufacturing license fee: this is the new opportunity now offered by Arm. The company has announced it is expanding the ways existing and new partners can access and license its technology for semiconductor design. Called ‘Arm Flexible Access’, the new engagement model enables SoC design teams to initiate projects before they license IP and to pay only for what they use at production, giving design teams more freedom to experiment and evaluate different options.

As the company explained in a press release, partners typically license individual components from Arm and pay a license fee upfront before they can access the technology. With ‘Arm Flexible Access’ they pay “a modest fee” for immediate access to a broad portfolio of technology, then pay a license fee only when they commit to manufacturing, followed by royalties for each unit shipped.

The portfolio made available through this new engagement model includes all the essential IP and tools needed for an SoC design: the majority of Arm-based processors within the Arm Cortex-A, -R and -M families, Arm TrustZone and CryptoCell security IP, select Mali GPUs, and system IP, alongside tools and models for SoC design and early software development. Access to Arm’s global support and training services is also included.
Memristor advancements
Researchers at the University of Michigan have built the first programmable memristor processor, or, as it is described in their paper published in Nature Electronics, “a fully integrated reprogrammable memristor–CMOS system for efficient multiply–accumulate operations.” Besides the memristor array itself, the chip integrates all the other elements needed to program and run it: a conventional digital processor, communication channels, and digital-to-analog and analog-to-digital converters to interface the analog memristor array with the rest of the chip. According to the researchers, memristors promise a 10-100 times improvement in performance and power over GPUs in machine learning applications, thanks to their in-memory processing capabilities.
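To make the in-memory processing idea concrete: in a memristor crossbar, weights are stored as device conductances, input values are applied as voltages on the rows, and the currents summed on each column are the multiply-accumulate results. The NumPy sketch below illustrates only this principle; it is not the authors’ implementation, and the array dimensions, conductance ranges and voltage values are invented for illustration.

```python
import numpy as np

# Illustrative sketch of the multiply-accumulate (MAC) principle behind a
# memristor crossbar. Weights live in the array as conductances G (siemens);
# the input vector is applied as row voltages V (volts). By Ohm's law each
# cell contributes a current G[i, j] * V[i], and Kirchhoff's current law sums
# those currents along each column, so the column currents are the
# matrix-vector product I = G.T @ V, computed where the data is stored.
# All sizes and values below are made up, not taken from the paper.

rng = np.random.default_rng(0)

rows, cols = 64, 128                        # hypothetical crossbar dimensions
G = rng.uniform(1e-6, 1e-4, (rows, cols))   # programmed conductances (S)
V = rng.uniform(0.0, 0.2, rows)             # input voltages on the rows (V)

# The analog array produces all column currents in a single step;
# here we emulate that with one matrix-vector product.
I = G.T @ V                                 # accumulated column currents (A)

print(I.shape)  # (128,) -- one MAC result per column
```

Because every column accumulates its products in parallel and the operands never leave the array, this is where the claimed performance and power advantage over GPUs comes from.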