Posts Tagged ‘armv8’

Samsung Exynos 7 ARM Cortex A57 Processor Linux Code Submitted

August 28th, 2014 5 comments

Samsung has not announced any 64-bit processor yet, but according to a recent patchset, Exynos 7 may be their first 64-bit ARM SoC, and it will be based on the faster Cortex A57 cores. A quick way to learn a little more is to check the device tree file (exynos7.dtsi).

Here’s an interesting snippet:

+	cpus {
+		#address-cells = ;
+		#size-cells = ;
+		cpu@0 {
+			device_type = "cpu";
+			compatible = "arm,cortex-a57", "arm,armv8";
+			reg = ;
+		};
+	};

As it stands, Exynos 7 would be a single-core Cortex A57 processor. It seems unlikely that a company would launch a single-core processor at this stage, so this is probably early code that does not support all cores just yet. We also know Samsung uses an ESPRESSO board for development with the Exynos 7 processor and 3 GB RAM.
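Core counts in a device tree are easy to check mechanically. The sketch below is a naive, text-level check rather than a real DTS parser, and `count_cpu_nodes` is just an illustrative helper name, but it shows the idea of counting `cpu@N` nodes in a .dtsi:

```python
import re

def count_cpu_nodes(dtsi_text: str) -> int:
    """Naively count cpu@<n> node declarations in device tree source.

    This is a rough text-level check, not a real DTS parser; it simply
    looks for `cpu@<unit-address> {` node openings.
    """
    return len(re.findall(r"\bcpu@[0-9a-fA-F]+\s*\{", dtsi_text))

snippet = """
cpus {
    cpu@0 {
        device_type = "cpu";
        compatible = "arm,cortex-a57", "arm,armv8";
    };
};
"""
print(count_cpu_nodes(snippet))  # a single cpu node, as in the patch
```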

Thanks to David for the tip.


How to Build and Run Android L 64-bit ARM in QEMU

August 23rd, 2014 9 comments

Most people can’t afford a Juno Cortex A57/A53 development board, and mobile processors based on the latest 64-bit ARM cores are only expected by early 2015. But thanks to a small team at Linaro, you can now build and run Android L 64-bit ARM in the latest version of QEMU, which supports Aarch64. Alex Bennée, an engineer working for Linaro, wrote a blog post on Linaro’s Core Dump blog explaining that the Android emulator is actually based on QEMU, the differences compared to mainline QEMU, the work they’ve done on Android L at Linaro, and most importantly, the basic steps showing how to build and try Android L 64-bit ARM (ARMv8) in QEMU. I’ve just done that, but unfortunately, although the builds complete, I could not manage to start Android L in QEMU yet. If you still want to give it a try, you’ll need a Linux PC and patience, as it may take about one day to retrieve the source code and build everything from source.

I’ve done all this on a computer running Ubuntu 14.04 with an AMD FX8350 processor and 16 GB RAM.

First, you’ll need to install an ARM 64-bit toolchain, some dependencies, and tools:

sudo apt-get install gcc-aarch64-linux-gnu build-essential git bison zlib1g-dev \
libglib2.0-dev libpixman-1-dev gperf android-tools-adb

The next step is to cross-compile a Linux 3.10 kernel for Android:

mkdir -p ~/edev/linaro
git clone
cd linux-android
git checkout ranchu-linaro-beta1

There’s a bug in the current version of the toolchain in Ubuntu 14.04 which prevents the build from completing. You can either remove CONFIG_DEBUG_INFO=y in arch/arm64/configs/ranchu_defconfig (I did that), or update your toolchain. Let’s complete the build:

ARCH=arm64 make ranchu_defconfig
ARCH=arm64 make CROSS_COMPILE=aarch64-linux-gnu- -j8

Now you need to build the Android Open Source Project (AOSP). If you haven’t done so, you’ll have to install the repo tool:

mkdir ~/bin
curl > ~/bin/repo
chmod a+x ~/bin/repo

Then get AOSP source code (master as below, or l-preview branch):

cd ..
mkdir AOSP
repo init -u
repo sync

The last step can take a few hours depending on your internet connection to Google servers.
Now download and apply a patch made by Linaro:

tar -xvf linaro-devices.tar.gz

Possibly configure git:

git config --global user.email "[email protected]"
git config --global user.name "Your Name"

You need to apply a patch for qemu:

pushd system/core
patch -p1 < android-init-tweaks.diff
popd

And build Android L for ARMv8:

source build/
lunch ranchu-userdebug
m -j8

The last step will again take a while. It took my machine 2 or 3 hours, and the total time was actually a bit more than that, as my PC suffered two thermal shutdowns during the build, and I had to restart the build twice. The last time, I decided to underclock my CPU to 3.4 GHz, and the build went through.

The last step before running Android L is to build QEMU:

cd ..
git clone
cd qemu-arm

git checkout ranchu-linaro-beta1
make -j8

All builds should now have completed successfully. We just need to create some symlinks to shorten the QEMU command line, and run QEMU:

cd ..
ln -s linux-android/arch/arm64/boot/ ranchu-kernel
ln -s AOSP/out/target/product/ranchu/ ranchu-build
./qemu-arm/aarch64-softmmu/qemu-system-aarch64 -cpu cortex-a57 -machine type=ranchu -m 4096 \
-kernel ./ranchu-kernel/Image -append 'console=ttyAMA0,38400 keep_bootcon' -monitor stdio \
-initrd ranchu-build/ramdisk.img -drive index=2,id=userdata,file=ranchu-build/userdata.img \
-device virtio-blk-device,drive=userdata -device virtio-blk-device,drive=cache \
-drive index=1,id=cache,file=ranchu-build/cache.img -device virtio-blk-device,drive=system \
-drive index=0,id=system,file=ranchu-build/system.img -netdev user,id=mynet \
-device virtio-net-device,netdev=mynet -show-cursor

That’s the output I get:

QEMU 2.0.50 monitor - type 'help' for more information
(qemu) adb_server_notify: Failed to establish connection to ADB server
console on port 5554, ADB on port 5555
VNC server running on `'

So it’s quite possible there’s a problem with adb, but my web searches did not help, and I failed to go further. More detailed instructions will soon be posted on the Linaro Wiki, so I may be able to find out where I made a mistake once they’re available.
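If you run into the same adb issue, one quick sanity check is whether the console and ADB ports QEMU reports (5554 and 5555 above) are actually reachable. This is a small, hypothetical helper, not part of the Linaro instructions:

```python
import socket

# Quick reachability check for the emulator's console (5554) and
# ADB (5555) ports. `port_open` is just an illustrative helper.
def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for port in (5554, 5555):
    state = "open" if port_open("127.0.0.1", port) else "closed"
    print(f"port {port}: {state}")
```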


More Technical Details & Benchmarks about Nvidia Tegra K1 “Denver” 64-bit ARM SoC

August 12th, 2014 1 comment

The 32-bit version of the Nvidia Tegra K1 has generally received good reviews in terms of performance, especially GPU performance, and the company has also provided good developer documentation and Linux support, including open source drivers for the Kepler GPU (GK20A) found in the SoC. But as initially announced, Tegra K1 will also get a 64-bit ARM version codenamed “Denver”, and Nvidia provided more details at the Hot Chips conference.

The 64-bit Tegra K1 will still feature a 192-core Kepler GPU, but replaces the four ARM Cortex A15 cores found in the 32-bit version with two ARMv8 “Project Denver” cores custom-designed by Nvidia. The multi-core performance of the dual-core 64-bit Tegra K1 @ 2.5 GHz may end up being equivalent to the quad-core 32-bit Tegra K1 @ 2.1 GHz, but the single-core performance will be much better thanks to a 7-way superscalar microarchitecture (vs 3-way for Cortex A15), as well as a 128KB L1 instruction cache, a 64KB L1 data cache, and a 2MB L2 cache.

To further improve performance, Nvidia implemented a new technique called “Dynamic Code Optimization” that optimizes frequently used routines into “tuned microcode-equivalent routines”, and stores them in a dedicated 128MB optimization cache in main memory. The optimization is done in software the first time a routine runs, as the overhead is said to be outweighed by the performance gains from the optimized code. Dynamic Code Optimization works with all standard ARM-based applications, requiring no customization from developers, and without added power consumption versus other ARM mobile processors.
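To make the idea concrete, here is a purely illustrative sketch of such a translate-and-cache scheme. The names and the threshold are invented for illustration; Nvidia’s actual mechanism operates on ARM instruction streams below the OS, not in application code:

```python
# Illustrative sketch only: count how often each routine runs, and once
# it crosses a threshold, pay a one-time "translation" cost and cache
# an optimized version for all later calls.
from typing import Callable, Dict

class OptimizationCache:
    def __init__(self, threshold: int = 3):
        self.threshold = threshold
        self.counts: Dict[str, int] = {}
        self.optimized: Dict[str, Callable] = {}

    def run(self, name: str, slow: Callable, optimize: Callable):
        # Already translated: use the cached fast version.
        if name in self.optimized:
            return self.optimized[name]()
        self.counts[name] = self.counts.get(name, 0) + 1
        if self.counts[name] >= self.threshold:
            # One-time translation cost, then cache the result.
            self.optimized[name] = optimize()
        return slow()

cache = OptimizationCache()
for _ in range(5):
    cache.run("hot_loop", lambda: "slow", lambda: (lambda: "fast"))
print("hot_loop optimized:", "hot_loop" in cache.optimized)
```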

With new low-latency power-state transitions (CC4 cluster retention), extensive power-gating, and workload-based dynamic voltage and clock scaling, Nvidia claims their dual-core 64-bit Tegra K1 processor will outperform existing quad- and octa-core processors on most mobile workloads, and it should even rival mainstream PC-class CPUs with much lower power consumption. You can find some benchmark results below comparing Tegra K1 32-bit performance to Tegra K1 Denver, Celeron N2910 (Bay Trail), Apple A7, Qualcomm Krait-400, and Haswell Celeron 2955U.

Nvidia Tegra K1 64-bit Benchmarks Against Competition (Click to Enlarge)

More good news: Denver will be pin-to-pin compatible with the original Tegra K1, which should make it pretty easy for OEMs to upgrade their products. Nvidia is currently working on Android “L” for the 64-bit Tegra K1, and products should be available by the end of the year. You can find more details in a white paper entitled “NVIDIA Charts Its Own Path to ARMv8”.


Linux 3.16 Released

August 6th, 2014 3 comments

Linus Torvalds announced the release of Linux kernel 3.16 over the weekend:

So nothing particularly exciting happened this week, and 3.16 is out there.

And as usual (previous release being the exception) that means that the merge window for 3.17 is obviously open. And for the third time in a row, the timing sucks for me, as I have travel coming up the second week of the merge window. Many other core developers will be traveling too, since it’s just before the kernel summit in Chicago.

So we’ll see how the next merge window goes, but I’m not going to worry about it overmuch. If I end up not having time to do all the merges, I might delay things into the week of the kernel summit, but I’ll hope to get most of the big merging done this upcoming week before any travel takes place, so maybe it won’t come to that. So this is just a heads-up that the merge window *might* be extended.

Anyway, back to the changes since -rc7: it’s really fairly small stuff randomly all over, with a third being architecture updates, a third drivers, and a third “misc” (mainly mm and networking). The architecture stuff is small ARM updates (mostly DT), some x86 Xen fixups, some random small powerpc things. The shortlog gives a good idea of what kind of stuff it all is, but it’s really just 83 commits (plus merges and the release commit) and about a third of them are marked for stable.

So while 3.16 looked a bit iffy for a while, things cleared up nicely, and there was no reason to do extra release candidates like I feared just a couple of weeks ago.


Kernel 3.15 brought various file system improvements, faster resume from suspend, and more. Some of the main changes in Linux 3.16 include:

  • Various KVM improvements: optimizations, support for migration, and GDB support for s390, little-endian support for POWER8, as well as MIPS improvements.
  • Xen – Virtual network interfaces now have multi-queue support for much better performance.
  • Goldfish virtual platform now has 64-bit support.
  • Hugepage migration has been turned off for all architectures except x86_64 since it is only tested on that architecture and there are bugs for some of the others.
  • Automatic NUMA balancing has been turned off for 32-bit x86. Existing 32-bit NUMA systems are not well supported by the code and the developers did not think the effort to support them would be worthwhile.
  • EFI – The kernel EFI code will now handle Unicode characters, and initial support for ARM64 (aarch64) has been added.
  • NFS – Patches to make loopback NFS mounts work reliably have been merged through the NFS tree. External data representation (XDR) handling in NFS has been reworked to support access control lists (ACLs) larger than 4KB. It also returns readdir() results in chunks larger than 4KB, giving better performance on large directories.
  • Modules now have the read-only (RO) and no-execute (NX) bits set on their data sections much earlier in the loading process, before parsing any module arguments. This will further reduce the time window in which a misbehaving (or malicious) module can modify or execute its data.
  • Support for TCP fast open over IPv6 has been added.
  • Support for busy polling on stream control transmission protocol (SCTP) sockets has been added. Busy polling is set on a socket using the SO_BUSY_POLL socket option; it can reduce the latency of receives on high-traffic interfaces that support the option.

New features and improvements specific to the ARM architecture include:

  • AllWinner – All platforms: AXP20x PMIC and MMC support, 5 drivers + SMP reworked for AllWinner A31, touchscreen drivers for AllWinner A10. DTS added for Mele M9 and R7. You can read details about AllWinner changes here.
  • Rockchip – RK3xxx SoC I2C drivers
  • Xen on ARM systems now supports suspend and resume.
  • Hibernation support added for ARM targets.
  • Initial support for ARM64 (aarch64) has been added
  • SMP support has been added for Marvell Armada 375 and 38x SoCs. SMP has been reworked for the Allwinner A31 SoC.
  • New ARM SoC added: ST Microelectronics STiH407; Freescale i.MX6SX; Samsung Exynos 3250, 5260, 5410, 5420, and 5800; and LSI Axxia AXM55xx.
  • Nouveau driver has initial support for NVIDIA Tegra K1 GK20A devices.
  • Various changes for Atmel AT91, Marvell Armada, Freescale i.MX, Samsung Exynos, and TI AM43xx SoCs.

Further details on Linux 3.16 will eventually be available online. For more details about ARM changes, remember to also check the ARM architecture and drivers sections.


Linaro 14.07 Release with Linux Kernel 3.16 and Android 4.4

August 1st, 2014 No comments

Linaro 14.07 has just been released with Linux Kernel 3.16-rc6 (baseline), Linux Kernel 3.10.50 (LSK), and Android 4.4.4.

This month, Linaro has continued development on the Juno 64-bit ARM development board, as well as other member boards from Broadcom (Capri), Qualcomm (IFC6410), HiSilicon (D01), Samsung (Arndale / Arndale Octa), etc. Android has been upgraded to version 4.4.4 with images released for Pandaboard, Arndale, Nexus 10, and Nexus 7, built with Linaro GCC 4.9.

Here are the highlights of this release:

  • Linux Linaro 3.16-rc6-2014.07 released
    • GATOR version 5.18 (same version as in 2014.04)
    • updated basic Capri board support from Broadcom LT. Good progress in upstreaming the code: the topic now has 21 patches on top of v3.16-rc4 vs 53 patches on top of v3.15 in the 2014.06 cycle
    • removed cortex-strings-arm64 topic as the code is accepted into the mainline
    • new topic from Qualcomm LT to add IFC6410 board support
    • updated Versatile Express ARM64 support (FVP Base and Foundation models, Juno) from ARM LT. cpufreq support for Juno has been added.
    • updated Versatile Express patches from ARM LT
    • more HiP0x Cortex A15 family updates from HiSilicon LT
    • switched to mainline support for Arndale and Arndale-octa boards
    • updated llvm topic (follows the community llvmlinux-latest branch)
    • Big endian support (the 2014.05 topic version rebased to 3.16 kernel)
    • removed ftrace_audit topic as the code is accepted into the mainline
    • config fragments changes – added ifc6410.conf
  • Linaro Toolchain Binaries 2014.07 released – Based on GCC 4.9 and updated to latest Linaro TCWG releases:  Linaro GCC 4.9-2014.07 & Linaro binutils 2.24.0-2014.07
  • Linaro Android 14.07 released
    • built with Linaro GCC 4.9-2014.07
    • Pandaboard, Arndale, Nexus 10, Nexus 7 upgraded to Android 4.4.4.
    • LSK Engineering build moved back to 4.4.2.
    • Android LSK v3.14 CI loop added
  • Linaro OpenEmbedded 2014.07
    • Integrated Linaro GCC 4.9-2014.07
    • Integrated Linaro EGLIBC 2.19-2014.07
    • Integrated Linaro binutils 2.24.0-2014.07
    • Upstreaming:
      • fixes recipes related to oe-core autotools update
      • cleaned up overlayed recipes
      • updated PM QA to 0.4.12
  • Linaro Ubuntu 14.07 released
    • added gstreamer 1.0
    • updated packages: ARM trusted firmware (support latest FVP models), PM QA (0.4.12), LSK 3.10.49/3.14.13 and linux-linaro 3.16-rc6 kernels.
  • Integrate ARMv8 Big endian systems into LAVA and CI
  • Migrate Linaro Android builds to 4.9 Linaro toolchain
  • LSK: add ARMv8 kernel + arm32 rootfs CI loop
  • Package rt-app
  • LSK: enable member kernel configs for build testing

A list of known issues, and further release details about the LEB, LMB (Linaro Member Builds), and community builds, as well as the Android, Kernel, Graphics, Multimedia, Landing Team, Platform, Power management and Toolchain components, are also available.


ARM and Qualcomm Release a New Guide About 32-bit to 64-bit SoCs

July 30th, 2014 1 comment

ARM and Qualcomm have been pretty successful with ARMv7 SoCs in the mobile space in recent years, and while 32-bit ARM (Aarch32) processors certainly have a few more years left, both companies are now moving to 64-bit ARM (Aarch64 / ARMv8), and they have released a document showing what has been achieved with ARMv7, the differences between ARMv7 and ARMv8, and the new capabilities that will be attainable with 64-bit processing.

Aarch32 vs Aarch64

The document covers the following:

ARM vs x86 vs Architecture Independent Code for 100 Top Apps in Google Play (US)

  • Introduction
  • ARM Business Model
  • The Mobile Computing Revolution (Tablets replacing Laptops)
  • Android on ARMv7-A and ARMv8-A
  • ARMv8-A Architecture
  • Backward Compatibility to ARMv7-A
  • ARM Cortex-A53 and Cortex-A57
  • ARM big.LITTLE Technology
  • The Transition to the ARMv8-A Architecture (Fast Models, Tools, Linaro…)
  • Qualcomm Technologies: Transitioning to 64-Bit with Integrated Mobile Design
  • Custom and ARM Designed Processors: The Right Technology to Any Market
  • Multiple Foundries, Flexible Production
  • Flexible design practices in action (Performance, price point, development time. Snapdragon 410 vs 610 vs 810)
  • Conclusion

Both companies clearly promote their respective products via this document, but there are lots of interesting details such as Intel vs ARM optimized apps in Google Play, performance of A57 vs A15 and A53 vs A7, a side-by-side comparison between the 32-bit and 64-bit ARM architectures, and so on. If you want the details, you can download the 20-page presentation entitled “ARM and Qualcomm – Enabling the Next Mobile Computing Revolution with Highly Integrated ARMv8-A based SoCs”.
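A quick way to check which of the two worlds your own userspace belongs to is to look at the pointer size; this small sketch simply inspects whatever machine it runs on:

```python
import platform
import struct

# A 64-bit (Aarch64-style) userspace uses 8-byte pointers, a 32-bit
# (Aarch32-style) one uses 4-byte pointers.
pointer_bytes = struct.calcsize("P")
print("machine:", platform.machine())
print("pointer size:", pointer_bytes, "bytes ->",
      "64-bit" if pointer_bytes == 8 else "32-bit", "userspace")
```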


ARM TechCon 2014 Schedule – 64-Bit, IoT, Optimization & Debugging, Security and More

July 23rd, 2014 No comments

ARM Technology Conference (TechCon) 2014 will take place on October 1 – 3, 2014, in Santa Clara, and as every year, there will be a conference with various sessions suitable for engineers and managers, as well as an exposition where companies showcase their latest ARM-based products and solutions. The detailed schedule for the conference has just been made available. Last year, there were 90 sessions organized into 15 tracks, but this year, despite receiving 300 applications, the organizers decided to scale it down a bit, and there will be 75 sessions in the following 11 tracks:

  • Chip Implementation
  • Debugging
  • Graphics
  • Heterogeneous Compute
  • New Frontiers
  • Power Efficiency
  • Safety and Security
  • Software Development and Optimization
  • Software Optimization for Infrastructure and Cloud
  • System Design
  • Verification

There are also some paid workshops that take all day with topics such as “Android (NDK) and ARM overview”, “ARM and the Internet of Things”, or “ARM Accredited Engineer Programs”.

As usual, I’ve gone through the schedule builder, and come up with my virtual schedule of interesting sessions during the 3-day event:

Wednesday – 1st of October

In this session, Dr. Saied Tehrani will discuss how Spansion’s approach to utilize the ARM Cortex-R line of processors to deliver energy efficient solutions for the automotive MCU market has led the company to become a vital part of the movement toward connectivity in cars. Beginning with an overview of the auto industry’s innovation and growth in connected car features, he will explain how these systems require high performance processing to give drivers the fluid experience they expect. Highlights in security and reliability with ARM Cortex-R, including Spansion’s Traveo Family of MCU’s will also be presented.

HEVC and VP9 are the latest video compression standards, and both significantly improve compression ratios compared to their widely used predecessors, H.264 and VP8. In this session the following will be discussed:

  • The market need for GPU accelerated HEVC and VP9 decoders
  • Challenges involved in offloading video decoding algorithms to a GPU, and how Mali GPU is well suited to tackle them
  • Improvement in power consumption and performance of Mali GPU accelerated decoder
  • big.LITTLE architecture and CCI/CCN’s complementing roles in improving the GPU accelerated video decoder’s power consumption

ARM’s Cortex-M family of embedded processors is delivering energy-efficient, highly responsive solutions in a wide variety of application areas, right from the lowest-power, general-purpose microcontrollers to specialised devices in advanced SoC designs. This talk will examine how ARM plans to grow the Cortex-M processor family to provide higher performance together with flexible memory systems, whilst still maintaining the low-power, low-latency characteristics of ARM’s architecture v7-M.

IoT devices, as embedded systems, cover a large range of devices from low-power, low-performance sensors to high-end gateways. This presentation will highlight the elements an embedded engineer needs to analyse before selecting the MCU for a design. Software is fundamental in IoT: from networking to power management, from vertical market protocols to IoT cloud protocols and services, from programming languages to remote firmware update, these are all design criteria influencing an IoT device design. Several challenges specific to IoT design will be addressed:

  • Code size and RAM requirements for the major networking stacks
  • Optimizing TCP/IP resources versus performance
  • Using Java from Oracle or from other vendors versus C
  • WiFi (radio only or integrated module)
  • Bluetooth (Classic versus LE) IoT protocols

Thursday – 2nd of October

Amongst ARM’s IP portfolio we have CPUs, GPUs, video engines and display processors, together with fabric interconnect and POP IP, all co-designed, co-verified and co-optimized to produce energy-efficient implementations. In this talk, we will present some of the innovations ARM has introduced to reduce memory bandwidth and system power, both in the IP blocks themselves and the interactions between them, and how this strategy now extends to the new ARM Mali display processors.

Designing a system that has to run on coin cells? There’s little accurate information available about how these batteries behave in systems that spend most of their time sleeping. This class will give design guidance on the batteries, plus examine the many other places power leakages occur, and offer some mitigation strategies.

64-bit is the “new black” across the electronics industry, from servers to mobile devices. So if you are building or considering building an ARMv8-A SoC, you should attend this talk, either to check that you already know everything or to find out what you should know! Using the ARMv8 Juno ARM Development Platform (ADP) as reference, this session will cover:

  • The ARMv8-A hardware compute subsystem architecture for Cortex-A57, Cortex-A53 & Mali based SoC
  • The associated ARMv8-A software stack
  • The resources available to 64-bit software developers
  • Demonstration of the Android Open Source Project for ARMv8 running on Juno.

Rapid prototyping platforms have become a standard path to develop initial design concepts. They provide an easy-to-use interface with a minimal learning curve and allow ideas to flourish and quickly become reality. Transitioning from a simple, easy-to-use rapid prototyping system can be daunting, but shouldn’t be. This session presents options for starting with mbed as a prototyping environment and moving to full production with the use of development hardware, the open-source mbed SDK and HDK, and the rich ARM ecosystem of hardware and software tools. Attendees will learn how to move from the mbed online prototyping environment to full production software, including:

  • Exporting from mbed to a professional IDE
  • Full run-time control with debugging capabilities
  • Leveraging an expanded SDK with a wider range of integration points
  • Portability of applications from an mbed-enabled HDK to your custom hardware

Statistics is often perceived as scary and dull… but not when you apply it to optimizing your code! You can learn so much about your system and your application by using relatively simple techniques that there’s no excuse not to know them. This presentation will use no slides, but will step through a fun and engaging demo of progressively optimizing OpenCL applications on an ARM-powered Chromebook using IPython. Highlights will include analyzing performance counters using radar diagrams, reducing performance variability by optimizing for caches, and predicting which program transformations will make a real difference before actually implementing them.

Friday – 3rd of October

The proliferation of mobile devices has led to the need of squeezing every last micro-amp-hour out of batteries. Minimizing the energy profile of a micro-controller is not always straight forward. A combination of sleep modes, peripheral control and other techniques can be used to maximize battery life. In this session, strategies for optimizing micro-controller energy profiles will be examined which will extend battery life while maintaining the integrity of the system. The techniques will be demonstrated on an ARM Cortex-M processor, and include a combination of power modes, software architecture design techniques and various tips and tricks that reduce the energy profile.

One of the obstacles to IoT market growth is guaranteeing interoperability between devices and services. Today, most solutions address application requirements for specific verticals in isolation from others. Overcoming this shortcoming requires adoption of open standards for data communication, security and device management. Economics, scalability and usability demand a platform that can be used across multiple applications and verticals. This talk covers some of the key standards like Constrained Application Protocol (CoAP), OMA Lightweight M2M and 6LoWPAN. The key features of these standards, like Caching Proxy, Eventing, Grouping, Security and the Web Resource Model, for creating efficient, secure, and open standards based IoT systems will also be discussed.

Virtual Prototypes are gaining widespread acceptance as a strategy for developing and debugging software removing the dependence on the availability of hardware. In this session we will explore how a virtual prototype can be used productively for software debug. We will explain the interfaces that exist for debugging and tracing activity in the virtual prototype, how these are used to attach debug and analysis tools and how these differ from (and improve upon) equivalent hardware capabilities. We will look in depth at strategies for debug and trace and how to leverage the advantages that the virtual environment offers. The presentation will further explore how the virtual prototype connects to hardware simulators to provide cross-domain (hardware and software) debug. The techniques will be illustrated through case studies garnered from experiences working with partners on projects over the last few years.

Attendees will learn:

  • How to set up a Virtual Prototype for debug and trace
  • Connecting debuggers and other analysis tools.
  • Strategies for productive debug of software in a virtual prototype.
  • How to setup trace on a virtual platform, and analysing the results.
  • Hardware in the loop: cross domain debug.
  • Use of Python to control the simulation and trace interfaces for a virtual platform.
14:30 – 15:20 – GPGPU on ARM Systems by Michael Anderson, Chief Scientist, The PTR Group, Inc.

ARM platforms are increasingly coupled with high-performance graphics processor units (GPUs). However, the GPU can do more than just render graphics: today’s GPUs are highly-integrated multi-core processors in their own right and are capable of much more than updating the display. In this session, we will discuss the rationale for harnessing GPUs as compute engines and their implementations. We’ll examine Nvidia’s CUDA, OpenCL and RenderScript as means to incorporate high-performance computing into low-power-draw platforms. This session will include some demonstrations of various applications that can leverage the general-purpose GPU compute approach.

Abstract currently not available.

That’s 14 sessions out of the 75 available, and you can make your own schedule depending on your interests with the schedule builder.

In order to attend ARM TechCon 2014, you can register online, although you could always just show up and pay the regular rate on-site, but that will cost you, or your company, extra.

Pass type                    | Super Early Bird Rate (ended June 27) | Early Bird Rate (ends August 8) | Advanced Rate (ends September 19) | Regular Rate
VIP                          | $999  | $1,299 | $1,499 | $1,699
All-Access                   | $799  | $999   | $1,199 | $1,399
General Admission            | $699  | $899   | $1,099 | $1,299
AAE Training                 | $249  | $299   | $349   | $399
Software Developers Workshop | $99   | $149   | $199   | $249
Expo                         | FREE  | FREE   | $29    | $59

There are more types of passes this year, but the 2-day and 1-day passes have gone out of the window. The expo pass used to be free at any time, but this year, you need to register before August 8. VIP and All-Access passes provide access to all events, General Admission excludes the AAE workshops and software developer workshops, and the AAE Training and Software Developers Workshop passes give access to the expo plus the specific workshops. Group discounts of up to 30% are also available.
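To put the table above in perspective, here is a quick calculation of what registering at the early-bird rate saves compared to the regular rate (prices copied from the table):

```python
# (early bird, regular) prices in USD from the table above.
rates = {
    "VIP": (1299, 1699),
    "All-Access": (999, 1399),
    "General Admission": (899, 1299),
}
for pass_type, (early, regular) in rates.items():
    saving = regular - early
    print(f"{pass_type}: save ${saving} "
          f"({saving / regular:.0%}) by registering before August 8")
```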
