Tech Coffee: Azure Digital Twins + HoloLens: Powering the Next Generation of IoT

Tech Coffee is a series that aims to introduce you to a new technology in less than 20 minutes. In this post, we will get an introduction to digital twins and some of the terms and technologies needed to create them. So grab a cup of coffee and let's get started!


Digital Twin?
There is a lot of discussion around digital twin solutions, where fleets of devices produce massive amounts of data, all communicated to the cloud for analysis and processing. This processed data is then consumed by interfaces such as digital twins, graph solutions, websites and so on to present the real-time state of a device, or the result of a prediction created by a machine learning algorithm.

Let’s first spend two minutes to get a short introduction to what a digital twin really is:

These solutions consist of a large set of systems talking with each other, where each is a technology of its own. To name a few, we have topics like IoT, IoT Edge, cloud, Artificial Intelligence, Machine Learning, Mixed Reality and Spatial Anchors, all fields deep enough to have their own specialists.

Due to this learning curve, getting started, as well as understanding how all of this fits together, can be hard. Going from a small IoT device, through the cloud, to a rendering on a mixed reality device such as HoloLens involves a lot of technological layers.

The following session from Microsoft Build 2019 goes through some of these concepts, and explains how the world of IoT and the world of Mixed Reality can be brought together.

Azure Digital Twins + HoloLens: Powering the Next Generation of IoT


Why all the 3D?
3D is a powerful asset when it comes to visualizing data. In real life, we are used to seeing and working with items and objects from multiple angles, getting a feel for them. When working with a machine, we know how it looks and how it feels.

Modern hardware such as mobile devices and PCs can easily render advanced 3D models and environments, and if designed the right way, these let us create brand new interfaces and ways of communicating with software. Suddenly we can bring the real asset into a digital world, where sensors and data are connected to it, so it renders and behaves much like in the real world, thus creating a digital shadow of a real asset.

Now, if we apply computation and processing through the cloud, and machine learning algorithms that predict when things need attention or give us a real-time production rate, we can start to both handle situations before they happen and simulate fictional situations that might happen in the real world.

Posted in Graphics, HoloLens, Mixed Reality

Apollo 11 HoloLens 2 Demo from Microsoft Build 2019

The Microsoft Build Conference just took place in Seattle, and I will be creating a couple of blog posts regarding content from Build 2019, but first, I’d like to share this little demo that was presented.
In this video, Andy and John share how technology and history can be combined to recreate historic events digitally. The cutting edge of technology in 1969 meets the cutting edge of technology in 2019.

In this demo, the Apollo Saturn V and the Lunar Module are beautifully rendered with an amazing level of detail, and interacted with through the HoloLens, digitally recreating the historic lunar landing.

Unreal Engine is about to get native support for HoloLens 2 development by the end of this month:
Creative communities across entertainment, visualization, design, manufacturing, and education eagerly anticipate Unreal Engine 4 native support for HoloLens 2, which Epic has confirmed will be released by the end of May. Originally intended as a stage demo for Microsoft Build, this remarkable interactive visualization of the Apollo 11 lunar landing, which celebrates its 50th anniversary this year, was unveiled by the Unreal Engine team. This is a recording of a live rehearsal that took place on May 5, 2019.


Mixed Reality and Unreal
In the short session below, Ryan Vance from Epic Games and Jackson Fields from Microsoft Mixed Reality talk more about Unreal Engine support for HoloLens 2 and Windows Mixed Reality headsets. The session also goes into more detail about the demo itself, and how things were set up.


If you wish to dive deeper into the tech itself, and how to get started making Mixed Reality apps using UE, this session will walk you through everything you need.



Reentry – An Orbital Simulator
The demo presented at Microsoft Build 2019 shows the Apollo rocket from an external perspective, and recreates some of its important stages. I have been working on another project, where I use a game engine to create a simulator, much like Flight Simulator, where you fly and operate these spacecraft using procedures and checklists similar to those used by the astronauts.
You can read more about this project at


Posted in Graphics, HoloLens, Mixed Reality

Reentry – An Orbital Simulator available on Steam


My space flight simulator Reentry – An Orbital Simulator is available as Early Access through Steam, and can be purchased on the following link:

The game is a realistic space flight simulator based on NASA's space programs: from the first American human spaceflight in Project Mercury, through the rendezvous and EVAs of Project Gemini, to the Moon landing in Project Apollo.

NASA's early spacecraft

The Apollo Command Module in Reentry, the spacecraft that flew to the Moon

The simulator puts you in control of NASA's early spacecraft, where a rich, interactive and functional virtual cockpit, modeled and implemented after the official NASA manuals, lets you operate and pilot the spacecraft using procedures similar to those the real astronauts used.

The Gemini Virtual Cockpit panels – considered the bridge to the Moon

Each spacecraft has almost every switch implemented and connected to an underlying system that is used to operate the spacecraft itself. This includes fuses, computers, the electrical system, the environmental control systems, attitude control and so on.

The Mercury Virtual Cockpit panels – NASA's first spaceship

Exceptional views of our oasis in the Universe
The Earth is built from very high resolution textures, allowing you to see, explore, observe and enjoy the views of Earth from space. Launching into orbit will let you watch the colors change as you fly coast to coast over Africa, and see mountain ranges, lakes, cities and everything else visible from orbital altitudes.

A Study Level simulation

The game comes with a set of missions and mission editors designed to teach you how to operate these highly complex machines. The Academy will take you through the concepts, and each spacecraft comes with a lengthy flight manual so you can start studying the spacecraft.

Purchase and support the development of Reentry!

You can purchase and download the Early Access right now by following this link:


Thank you for your support! 🙂

Posted in Uncategorized

Project Apollo for the Reentry Space Simulator (UWP)


A quick update from the development of Project Apollo for Reentry. For those of you who are unfamiliar with my project, Reentry is a Windows 10 UWP app that lets you fly the Mercury, Gemini and soon the Apollo spacecraft from NASA's early space programs. The purpose is to give you a realistic feeling of what it was like to be an astronaut in these machines, in full 3D. You are able to follow the real checklists the astronauts used, and study the spacecraft using the real manuals created by NASA.

In addition, the simulator comes with an in-game academy, as well as the game manuals found here:

In this post I wish to give you the state of the project, how things fit together, and how it looks!


Before visiting Apollo, let me show you what you currently have access to.

The game is in Technical Preview II, and new updates and modules are simply submitted as updates. Your installation will stay up-to-date as they roll out.

Technical Preview II gives you access to both the Mercury and the Gemini spacecraft. These are still works in progress, but in a state that allows you to perform complex maneuvers in space, as well as follow real checklists and so on. They also come with some missions you can fly; however, the mission system is not final.

You can find some videos at

Mercury cockpit

Gemini cockpit

Mercury-Atlas launching


The Apollo module has been my main goal since I started working on this project, but I wanted to start with the basics, and also where NASA started (given the complexity of the entire Apollo program). Mercury and Gemini have been two long projects, a total of three years. This has given me good insight into how the technology of the space program was developed, into astrophysics and orbital mechanics, and into how I can deal with some of the mistakes I have made during development.

This section will give you an overview of the current state of the project, from a development perspective.


Both the Mercury and the Gemini modules for Reentry were based on my physics engine named GeoGravity to enable orbital mechanics around Earth. The first major change for Apollo is that I'm now working with a PhD in astrophysics on combining my engine with his to solve a few things:
1) Stability
2) Going to the Moon
3) More accurate and realistic math

I learned a lot from implementing the first two iterations of my physics engine, including how to handle scale and double precision in Unity, and with this new engine, these learnings are all incorporated to give the sim more flexibility and better graphics.

In the screenshot above, the calculations required to fly to the Moon are being tested in an isolated environment, a completely separate project. Once it's right, I will merge it into the Reentry project and start working on the TLI logic in Apollo.


Project Apollo is far from complete, but a version will soon be released into the Technical Preview II found in Windows Store.

Most of the major components of the panel are complete, including both the model and its use as an interface to the mechanics under the hood.



In the screenshot above you can see that a lot of the switches are already in place. Each is connected logically to internal systems; the missing switches are not yet implemented. What you see is the commander's view (left seat) and the controls for both the primary navigation and control systems, as well as the backup Spacecraft Control System.


The above is the Lunar Module Pilot's view, and contains the controls for the Electrical Power System, the Service Propulsion System and the Fuel Cells. It also contains the controls for communication.


The center seat is for the Command Module Pilot, and contains the controls for the computer, the Service Module thrusters, joysticks for both rotational and translational maneuvers, the Cautions and Warnings panel, Mission Timer and the Environmental Control Systems. The hole in the middle is the entry to the Lunar Module itself, once docked with it.

Again, as you can see, most of the switches for the Environmental Control System and the Communication Systems are not yet in place.


With the exception of a stable guidance computer (LVDC), the launch sequence is working. The computer can run programs and acts as the interface between the astronauts and the primary navigation and control system.

(Yes, the texture is wrong on keypad number 7)

The computer does not run the real emulation of the Apollo Command Module Computer (due to licensing); however, I'm working on an implementation that replicates much of its functionality and behavior. All the programs for prep and launch have been implemented.


The launch sequence kicks in once the countdown reaches zero and ignition is triggered. The rocket ascends into low-Earth parking orbit, where it will orbit Earth until the Trans-Lunar Injection burn.



Apollo comes with two independent systems for controlling the spacecraft. The first is the Primary Guidance, Navigation and Control System, and the backup is the Spacecraft Control System. Basically, the PGN&C system is controlled automatically and/or through the Command Module Computer, while the SCS is controlled manually through switch configurations on the panels.

Most of these systems have been implemented, with the exception of the Flight Director and Attitude Indicator (FDAI). The FDAI is functional, but I have not yet implemented the correct rotation it should drive to based on the attitude of the spacecraft relative to a stable platform.



Each of the thrusters/quads can be configured/enabled independently with both circuit breakers and switches in the cockpit.



Apollo contains a lot of electronics, from the Saturn V, to the Service Module, to the Command Module. These are controlled through circuit breakers, switches and automatic systems. The spacecraft is only connected to external power while on the launch pad; once the umbilical disconnects, it runs on internal power sources.

These power sources are both battery powered (backup) and Fuel Cell powered (primary). The Fuel Cells are located in the Service Module and are disconnected before reentry. Both of these power sources are able to generate both DC and AC power; AC is produced through inverters. You have a lot of control over the electrical system, and it's one of the most important systems to learn and pay attention to.


The Command Module comes with a lot of internal and external lights. You are able to configure which lights are powered, and their dim levels. The panels are illuminated, as is every digit you see.




As you can see from the screenshots above, the panels can be configured independently. You have three light control panels, so you can create a dark atmosphere, a bright one, or, as in the last screenshot, configure one side to be bright and the other to be dark.


The SPS is the main engine, often referred to as The Engine. It is what alters your orbit (delta-v) after the Launch Vehicle is separated and, most importantly, takes you home from Lunar Orbit. It's basically the bell-shaped engine on the Service Module.


The engine needs to be gimbaled to keep the thrust balanced around the center of gravity, as well as balanced when it comes to propulsion. This can be controlled on the panel as well.

Using the control panel to the left, you can control the balance of the oxidizer and the fuel. Also, on panel 1 there is a gimbal panel that enables you to gimbal the SPS on the pitch and yaw axes.

The SPS is also re-ignitable, meaning you can ignite it multiple times. Once the fuel and oxidizer levels get into the mid-range, the spacecraft needs a forward thrust before ignition to make sure the propellant settles at the correct end of the tanks. The translational thrusters are used to do this.

The SPS is an important piece of equipment onboard, so it's good to know how it works. Luckily, it's mostly automatic.


From the cockpit of a virtual Apollo Command Module, I wish to thank you for your time!

If you want to follow the project and updates, feel free to join my Facebook page for the project:

This is the first time I have shared these details about the new module for Reentry. I hope you found it interesting, and feel free to reach out with any questions! Lastly, happy International Women's Day to everyone out there!


Posted in Game programming

XNA Shader Programming source now on GitHub

As with the Commodore 64 programming tutorial series, I have now moved all the source from my XNA Shader Programming tutorial series to GitHub.

The XNA Shader Programming series goes through the theory and the HLSL implementation of various effects and concepts. Even though XNA is old, the shaders still look the same, and by following the guides, you should easily be able to implement these in Unity, DirectX, OpenGL and so on.

You can find the repo here, with links to the tutorial articles as well.


Note: If you are at GDC 17, let me know; I hope to meet some of my readers there!

Posted in Math, Shaders, XNA, XNA Shader Tutorial

Commodore 64 Assembly Programming on Windows


In 2011, I wrote a tutorial on how to program for the Commodore 64 on Windows. Today, I revisited the entire tutorial series, made changes and published the source on GitHub.

I know the old posts had a lot of dead links due to changes in how OneDrive was hosting the files, but this problem is now gone: you can find everything related to each tutorial hosted on GitHub. Also, due to some formatting issues that occurred when copying the code listings from the posts, I have uploaded the individual code listings with correct formatting, each with a compiled .prg file so you can easily run it and see how it should look if you get stuck.

Thanks for all the feedback on this tutorial, it’s one of my most read series.

You can find the repo here with links to the individual posts:


Posted in Commodore 64

Project Gemini coming soon to my space simulator ReEntry!

Just uploaded the first gameplay video from my space simulator ReEntry! This is still an early preview, but it showcases a lot of functionality.

In the video, you will see the launch from the astronaut's perspective. You need to flip switches to power various systems, configure the spacecraft for launch and monitor the instruments during ascent.

Once in orbit, I use the onboard computer to configure the orbital parameters to enter a roughly circular orbit, before setting it up for rendezvous and burning to reach the target satellite, named Agena. Once close to Agena, you can use the radar and the encoder to communicate with it: turn it on, configure its lights and so on, before docking with it.

All switches are functional, but a lot of polish and tweaking is needed.

The goal of this simulator is to teach you how the Mercury and Gemini spacecraft worked, and the technology used to reach orbit and rendezvous. To fly them, you can use the real manuals and checklists provided by NASA, or use the in-game academy.

Hope you will enjoy this little video!

Posted in Game programming, Unity, UWP

How to develop augmented reality apps with Vuforia for Windows 10

I recently published an article on developing Augmented Reality apps using Vuforia on the Windows Developer blog and thought I'd share it with you.

“Augmented Reality is a way to connect virtual objects with the real world, making it possible to naturally interact with them by use of mobile devices like phones, tablets or new mixed reality devices like HoloLens.

Vuforia is one of the most popular Augmented Reality platforms for developers, and Microsoft partnered with Vuforia to bring their application to the Universal Windows Platform (UWP).

Today, we will show you how to create a new Unity project and develop a real AR experience from scratch for devices running Windows 10.


You can download the source for this application here, but I encourage you to follow the steps and build this yourself.”

You can find all the steps and the full article at the Windows Blog:

Happy Holidays!

Posted in Augmented Reality, Graphics, Unity, Vuforia, Windows 10

Practical DirectX 12 – #5: Getting started with the DirectX Math API (Win32 + UWP)


Welcome back to the DirectX 12 Programming series!

DirectX 12 has been out for about a year now and is getting very mature. Developers around the world have started adopting DX12 and porting their engines to it. Even Unity and Unreal Engine are working on DirectX 12 support.

DirectX 12 brings a lot of new features, and it takes you much closer to the metal than ever before. This means you get better performance and room to do a lot of neat stuff, but it also gives you more responsibility when it comes to handling the low-level details.

Last year I wrote a series of articles covering the basics of DirectX 12 with a theoretical approach. This article continues that series, but from now on, I want you to get your hands dirty and start writing DirectX 12 enabled C++ apps, hence the change in the tutorial series!

This article will get you started with DirectX Math, a library that has been around since Windows 8 and is heavily used in DirectX 12 applications. To get you started, we will do the following:
1) Learn how to add and enable DirectX Math in your applications.
2) Write a simple C++ Win32 console application that covers the basic operations of DirectX Math.
3) Use the above to write a more complex DirectX Math enabled Windows 10 app. A Windows 10 app can be submitted and distributed through the Windows Store.

Here is a quick screenshot of what we will end up with today. It's a Windows 10 app that uses something called Ink to draw vectors by hand, and then converts these as input to our application for vector operations.

All the source for these tutorials can be found @ GitHub:

What is the DirectX Math API?

The DirectX Math API is a library of types and functions for math commonly used in graphical applications: linear algebra, vector operations and matrices. It's part of the Windows SDK, so you don't need to download and install anything, and it can be used in any application, both games and traditional apps, from Windows 8 and up.

It's based on the SSE2 (Streaming SIMD Extensions 2) instruction set with 128-bit wide SIMD registers. This means that it can operate on four 32-bit (32×4=128) floats or integers with only one instruction.

For example, if you want to add two vectors A and B together:

A + B = (Ax+Bx, Ay+By, Az+Bz, Aw + Bw)

With SIMD (Single Instruction Multiple Data), this can be done by a single instruction rather than multiple scalar instructions.

The library provides an implementation using high-performance SSE/SSE2 intrinsics, an implementation using ARM-NEON intrinsics for the ARM platform, and an implementation that doesn't use intrinsics at all.


How do I use DirectX Math in my own applications?

Since it’s a part of the Windows SDK, you can use it by simply including its header files. The main header file is <DirectXMath.h> but depending on what you want, there are others you can include too.

The other header files are:
– DirectXPackedVector.h
– DirectXColors.h
– DirectXCollision.h

Simply include these where needed.

Writing a simple DirectX Math enabled Win32 console application

For the first demo, we will simply create a normal Win32 console application using Visual Studio Community 2015 (free).

This tutorial series will be hands-on, so you will need to follow along! If you don't have Visual Studio installed, go ahead and download and install it now.

Let’s jump right in!

1) Create a new project
Launch Visual Studio and hit File->New->Project…

2) Win32 Console App Template
Select C++ as the language, the Win32 category and the Win32 Console Application template, and give it any name you like. Hit OK when ready.

Click Next on the popup dialog, and Finish on page 2, leaving all the default values (for simplicity; you can change them if you want).

3) Win32 Console project generated
Once done, you should have a simple Win32 console project created.

4) Including the DirectX Math API
In our main file (in my case, DX12Tutorial1DirectXMath.cpp), we need to include DirectXMath.h. Add the following line of code:
#include <DirectXMath.h>


And that’s all it takes to add the DirectX Math API to your application. We will get back to this demo in the section below, so don’t close it.

A quick primer on DirectX Math

So, now that you know how to add the APIs to your project it’s time to learn how it works.

You will mostly use DirectX Math for vector operations, matrix operations and transformations.

Starting with Vector operations!

The main vector type in the DirectX Math API is XMVECTOR, which can hold 4 floats or ints. This type uses the SIMD registers to store its data, and uses SSE2 if available.

XMVECTOR a = XMVectorSet(1.0f, 0.0f, 0.0f, 0.0f);

The XMVectorSet function takes these values, and creates an XMVECTOR using the given values.

However, it's normal to use another type for class members and convert it to XMVECTOR when needed. XMVECTOR is always 4D, meaning it holds 4 floats or ints no matter what. If you make a 2D game, you can still use XMVECTOR, setting the unused components to zero. The other type is called XMFLOATn, where n is how many components you need. For example, for a 3D vector you can use XMFLOAT3, and for a 2D vector you can use XMFLOAT2.

XMVECTOR is the only vector type that uses the SIMD capabilities. If that isn't important to you, you can go ahead and use XMFLOATn directly in calculations.

Otherwise, what you want to do is store the data in XMFLOATs and then convert it to XMVECTOR when doing calculations. The DirectX Math API comes with functions that help you convert between them, called the loading and storing functions.

To convert from an XMVECTOR to an XMFLOAT3 (a 3D vector), you use the XMStoreFloat3 function. There is one for 2D and one for 4D as well.
For example, if we have an XMVECTOR v and want to convert it to an XMFLOAT3 d for storage in a class:
XMFLOAT3 d;
XMStoreFloat3(&d, v);

If we want to go the other way, say we have an XMFLOAT3 d from somewhere and want to use it in an XMVECTOR operation, we use XMLoadFloat3:
XMVECTOR v = XMLoadFloat3(&d);

Vector operation basics
Once you have your XMVECTORs, calculation is simple. You can add, subtract and do scalar multiplications with them.

1) Adding some vectors
In your example application, define the following 4 vectors by adding these lines:

XMVECTOR a = XMVectorSet(1.0f, 0.0f, 0.0f, 0.0f);
XMVECTOR b = XMVectorSet(0.0f, 2.0f, 0.0f, 0.0f);
XMVECTOR c = XMVectorSet(0.0f, 0.0f, 3.0f, 0.0f);
XMVECTOR d = XMVectorSet(1.0f, 1.0f, 1.0f, 0.0f);

2) Using namespace std and DirectX
I also added two using lines: one for std and one for DirectX namespaces. You should add these too:

using namespace std;
using namespace DirectX;


3) Simple operations

Now, let’s do a few operations on these vectors, add the following lines below the XMVECTOR definitions:

XMVECTOR r1 = a + b;
XMVECTOR r2 = b - d;
XMVECTOR r3 = c * 5.0f;


4) Printing the results to the console
There are many ways of printing to the console. Let's go with a very simple one. We need to declare two function prototypes just below the using statements:

void PrintVector(XMVECTOR p);
void InputToExit();

Then, below the main function, add these two functions:

void PrintVector(XMVECTOR p)
{
    XMFLOAT3 c;
    XMStoreFloat3(&c, p);

    cout << "(" << c.x << ", " << c.y << ", " << c.z << ")" << endl;
}

void InputToExit()
{
    cout << "Press ENTER to quit";
    cin.get(); // wait for ENTER
}

PrintVector takes an XMVECTOR and prints its components. If you are an advanced user, you would look at the special types for argument passing (like FXMVECTOR), but I don't want to overload you, so we will just keep using XMVECTOR.

And lastly, print out the vectors that contain the results like this:

cout << "a + b = ";
PrintVector(r1);

cout << "b - d = ";
PrintVector(r2);

cout << "c * 5.0 = ";
PrintVector(r3);


The InputToExit() function is there as a simple way for the user to hit ENTER to exit.

Your code should now look something like this:

5) Run!
Compile and run the example by pressing F5 or the play button. The results should look something like this:

More Vector operations
You can also do the usual vector operations: finding the dot product, the cross product, the angle between vectors, normalizing vectors, calculating the length/magnitude, and more.

6) The last vector operations for today
Add these lines just before the InputToExit() function

XMVECTOR aDotd = XMVector3Dot(a, d);
XMVECTOR aCrossb = XMVector3Cross(a, b);
XMVECTOR lengthOfd = XMVector3Length(d);
XMVECTOR dNormalized = XMVector3Normalize(d);

cout << "a . d = ";
PrintVector(aDotd);

cout << "a x b = ";
PrintVector(aCrossb);

cout << "|d| = ";
PrintVector(lengthOfd);

cout << "d normalized = ";
PrintVector(dNormalized);

Compile and run to see this in action!


Matrix and the Linear Transformations

For the last part of this sample I want to take a quick look at how to do calculations with matrices and linear transformations. It works in a similar fashion as how the vector operations work.

The main matrix type of DirectX Math is XMMATRIX, representing matrices up to 4×4. In fact, just like XMVECTOR, they are all treated as 4×4 matrices, where unused columns and rows are simply set to zero. XMMATRIX really just uses 4 XMVECTORs, one for each row, and is created using the XMMatrixSet function or one of the constructors.

As with XMVECTOR, the XMMATRIX type is typically used in global or local scope, as well as during calculations. If you need to store matrices as class members, it is recommended to use XMFLOAT4X4, and then use XMLoadFloat4x4 and XMStoreFloat4x4 to convert between XMFLOAT4X4 and XMMATRIX.

1) Printing matrices
Let’s jump back into our demo application and add two functions that we will use to print our matrices.

Add these two lines below the PrintVector prototype:
void PrintMatrix(XMMATRIX *m);
void PrintMatrixRow(XMVECTOR p);

And define the functions below the PrintVector(..) function at the bottom of the file, but above InputToExit():
void PrintMatrix(XMMATRIX *m)
{
    for (int i = 0; i < 4; i++)
        PrintMatrixRow(m->r[i]);
}

void PrintMatrixRow(XMVECTOR p)
{
    XMFLOAT4 c;
    XMStoreFloat4(&c, p);

    cout << "[" << c.x << "\t" << c.y << "\t" << c.z << "\t" << c.w << "]" << endl;
}

These functions write the contents of a matrix out to the console. Notice that we simply extract each XMVECTOR row of the matrix and print its components. We could have used the PrintVector function here too.

2) Adding a few matrices
It’s time to add a few matrices to our demo application.
Add the following matrices just above the InputToExit() function call:

XMMATRIX mA(1.0f, 0.0f, 0.0f, 0.0f,
            0.0f, 2.0f, 1.0f, 0.0f,
            2.0f, 2.0f, 2.0f, 0.0f,
            1.0f, 1.0f, 1.0f, 1.0f);

XMMATRIX mB(1.0f, 0.0f, 0.0f, 0.0f,
            0.0f, 2.0f, 0.0f, 0.0f,
            0.0f, 0.0f, 2.0f, 0.0f,
            2.0f, 2.0f, 2.0f, 1.0f);

XMMATRIX mAxB = mA * mB;
XMMATRIX mIdent = XMMatrixIdentity();
XMMATRIX ATranspose = XMMatrixTranspose(mA);

Here we are defining a few matrices using DirectX Math. First we have a random general matrix mA, and another matrix mB.

Then we multiply them to create a new matrix. Notice that, as with XMVECTOR, we can use special functions to get the identity matrix and the transpose. There are many others, such as functions for the determinant and the inverse matrix.

3) Printing the results
The last thing we want to do with these matrices is print them. Add the following lines of code below your matrix definitions:

cout << endl << "mA = " << endl;
PrintMatrix(&mA);

cout << endl << "AxB = " << endl;
PrintMatrix(&mAxB);

cout << endl << "mIdent = " << endl;
PrintMatrix(&mIdent);

cout << endl << "ATranspose = " << endl;
PrintMatrix(&ATranspose);

The code should look something like this:

4) Build and Run 
Build the demo application and run it to see the output in your console window.


Linear Transformations
This last short section of the primer will show you how to do linear transformations using the DirectX Math API. It's pretty simple. The API has functions to do translations, scaling and rotations (XMMatrixTranslation, XMMatrixScaling and XMMatrixRotationX/Y/Z, among others):


There are others that enable you to rotate around an arbitrary axis, scale from vectors and so on, but these are the basic functions.

Let’s take a look at this in action!

1) Linear transformations matrices
Let's first define the matrices we need to transform an object in a game world. Add these lines at the end, just before the InputToExit() function call:
XMMATRIX trans = XMMatrixTranslation(10.0f, 5.0f, 1.0f);
XMMATRIX scale = XMMatrixScaling(2.0f, 2.0f, 2.0f);
XMMATRIX rotX = XMMatrixRotationX(XMConvertToRadians(45.0f)); // the rotation functions take radians

2) Combining the matrices
Next, we will create a new matrix that is the result of the above matrices multiplied. Add this line below the above:
XMMATRIX world = rotX * scale * trans;

3) Printing the result
Last, we want to print the results. Add these lines below the above:

cout << endl << "world = " << endl;

The code should look like this:

4) Build and Run
Build the application and run it to see the results.


And that’s it for the Win32 Console application!

Source code on GitHub
The source of the entire Win32 Console demo application can be found here.

Building a Universal Windows Application that uses the DirectX Math APIs

For this last section, we will take a quick look at how to create a more complex, real-world application than the previous one. This app will use the Windows 10 SDK to create a Universal Windows Application that runs on all Windows 10 devices and can be distributed through the Windows Store.

We will start by defining the User Interface, and then implement the code-behind using C++ and the Windows SDK.


What will this app do?

We will use Ink, a Windows 10 feature that enables us to easily draw on a canvas. We will draw two vectors, add them together using the DirectX Math library, draw the result, and also calculate the angle between them.


The app will work like this. First we draw one line and create a vector V1 between its endpoints. Next we draw another line, giving vector V2. Once both lines are drawn, we create V3 = V1 + V2 and calculate the angle between V2 and V3.

Creating the project

1) In Visual Studio, create a new project (either in a new solution, or in the same one as where the Win32 app for this tutorial is) from the File->New->Project menu:


2) Select the Blank App (Universal Windows) template from the Visual C++ -> Windows -> Universal section:

Give it a name, like Tutorial1UWP or Vector Drawer 2000 Deluxe or anything you want and press OK.

This will generate a project with a project tree that looks like this:

This is the standard UWP solution containing everything Windows 10 needs to build and deploy a basic do-nothing app.

3) Build and Run
Press the green play button using the Local Machine selection to deploy and run this app on your local Windows 10 machine.


The app will now build and deploy to your machine, and once ready, launch. So far this is just an empty window with nothing to do.


Creating the UI in XAML

XAML is a language that looks a bit like XML or HTML, using tags to define a user interface. It is a very powerful design language, and you can use JavaScript, C# or C++ to implement the logic behind it.

1) Locating our entry point and design choices
The app template comes with a file called MainPage.xaml with a code-behind file called MainPage.xaml.cpp and its header-file:


Whenever we need to modify the UI, we can open MainPage.xaml. If we need to write code to implement the logic behind the design, we open the MainPage.xaml.cpp/h file.

Open MainPage.xaml to see it in a design/code view:

The top part of the view shows how our design looks from an editor perspective. You can use this as your UI editor. You can preview the look on various devices by changing the resolution using the top-left dropdown menu above the UI editor.

Below the editor, you can see the XAML code for our design. It currently contains only an empty Grid element. A Grid is used to create a grid-like user interface with rows and cells that contain various controls.

Our design will have two rows: a top row spanning the width of the device that holds our app title, and a second row taking up the rest of the screen space that contains our ink canvas, as well as a few other controls to display some text.


The Violet part on top is row 1, and then the big gradient dark area is our ink canvas.

Our Vector Editor will contain three lines and four text blocks that will be added using XAML, as well as an inking canvas.

The three lines will be used to display our vectors, and the four text blocks will be used to show what line is visible (V1, V2 and V3) and a last one for the angle calculation in our lower right corner.

For simplicity, I chose to add these directly in the XAML instead of creating them programmatically. The programmatic approach would keep the XAML cleaner in this case, but it would be harder to follow. Also, it is good to separate UI and logic as much as possible.

2) Writing our UI Code

To replicate my UI, replace your entire Grid-section with this code:

<Grid Background="BlueViolet">
    <Grid.RowDefinitions>
        <RowDefinition Height="Auto"/>
        <RowDefinition Height="*"/>
    </Grid.RowDefinitions>

    <StackPanel x:Name="HeaderPanel" Orientation="Horizontal" Grid.Row="0">
        <TextBlock x:Name="Header"
                Text="Tutorial 1 - DirectX Math"
                Style="{ThemeResource HeaderTextBlockStyle}"
                Margin="10,0,0,0" Foreground="White" />
    </StackPanel>

    <Grid Grid.Row="1">
        <Grid.Background>
            <LinearGradientBrush StartPoint="0,0" EndPoint="1,1">
                <GradientStop Color="#303030" Offset="0"/>
                <GradientStop Color="#101010" Offset="0.64"/>
                <GradientStop Color="#101010" Offset="0.75"/>
                <GradientStop Color="#303030" Offset="1"/>
            </LinearGradientBrush>
        </Grid.Background>

        <Path x:Name="anglePath" Stroke="White" StrokeThickness="2"
                Data="M 100,200 C 100,200 125,220 150,200" Visibility="Collapsed" />

        <Line Name="v1Line" Stroke="White"
            X1="-1" Y1="-1"
            X2="-1" Y2="-1"
            StrokeThickness="4" Visibility="Collapsed" />

        <Line Name="v2Line" Stroke="White"
            X1="-1" Y1="-1"
            X2="-1" Y2="-1"
            StrokeThickness="4" Visibility="Collapsed" />

        <Line Name="v3Line" Stroke="White"
            X1="-1" Y1="-1"
            X2="-1" Y2="-1"
            StrokeThickness="4" Visibility="Collapsed" />

        <TextBlock x:Name="v1Text" FontSize="18" Foreground="White" Text="V1" Visibility="Collapsed"/>
        <TextBlock x:Name="v2Text" FontSize="18" Foreground="White" Text="V2" Visibility="Collapsed"/>
        <TextBlock x:Name="v3Text" FontSize="18" Text="V3" Visibility="Collapsed" Foreground="BlueViolet" FontStyle="Oblique"/>

        <TextBlock x:Name="CalculationsText" FontSize="25" Text="A" Visibility="Visible" Foreground="BlueViolet" HorizontalAlignment="Right" VerticalAlignment="Bottom" Margin="0 0 10 10"/>

        <InkCanvas x:Name="inkCanvas" />
    </Grid>
</Grid>
What we do here is first define our grid layout with two rows. The first row automatically gets its height from the content we put in it, and the last row fills the rest of the screen space.

Then we are creating a StackPanel that will host our title. A StackPanel is simply a control that stacks things either horizontally or vertically. For now, we only add a TextBlock in it to display a title. We also specify that the StackPanel will be placed in row 0 of our grid, the first row.

Next we create another Grid that will only have one big cell that hosts everything our editor needs. We set the background to a gradient color, and add the different controls we need. Most of these controls are hidden by setting the Visibility property of the various controls to Collapsed (we will change this flag in our logic).


Developing the code-behind logic

We currently have a UI with a set of controls just idling. Let’s bring everything to life using DirectX Math, the Windows SDK and C++!

1) Defining our header file
Let’s start by defining what we need in our header file. Open MainPage.xaml.h to see it in Visual Studio.

First of all, we need to add an include for DirectXMath.h, just as we did earlier in our Win32 example.

Add this line below the MainPage.g.h include line:
#include <DirectXMath.h>

Next, add a using statement to the DirectX namespace:
using namespace DirectX;

Below the constructor, add a new private section with the following members and functions:

    XMFLOAT2 v1Start;
    XMFLOAT2 v1End;
    XMFLOAT2 v1;

    XMFLOAT2 v2Start;
    XMFLOAT2 v2End;
    XMFLOAT2 v2;

    XMFLOAT2 v3;

    int drawingLineId = 0;

    void OnStrokeStarted(Windows::UI::Input::Inking::InkStrokeInput ^sender, Windows::UI::Core::PointerEventArgs ^args);
    void OnStrokeEnded(Windows::UI::Input::Inking::InkStrokeInput ^sender, Windows::UI::Core::PointerEventArgs ^args);
    void DrawResultVector();
    void ResetDrawingCanvas();

We will soon take a closer look at each of these, but at a high level: we use XMFLOAT2 to store a 2D coordinate for the start and end points of our vectors, as well as one for each vector itself.

We use drawingLineId to identify which line we're currently drawing (V1 or V2). Then we have two event handlers, OnStrokeStarted and OnStrokeEnded.

Lastly, we have two functions: DrawResultVector() for drawing V3 once we have V1 and V2, and ResetDrawingCanvas() for clearing the canvas when we start drawing a new line after V1 and V2 are defined.

2) Implementing our functions

The last thing we need to do is to implement our functions. This is done in the MainPage.xaml.cpp file, so go ahead and open this now.

I will give you the code function by function, and then at the end we will compile and run. It's important that you change the namespace to the one used in your own application; I'm using PracticalDirectX12_Tutorial1UWP.

First of all, add a using statement for DirectX below the others:
using namespace DirectX;

3) The Constructor

Replace your constructor code with this:


MainPage::MainPage()
{
    InitializeComponent();

    // Accept the following input
    inkCanvas->InkPresenter->InputDeviceTypes =
        Windows::UI::Core::CoreInputDeviceTypes::Mouse |
        Windows::UI::Core::CoreInputDeviceTypes::Pen |
        Windows::UI::Core::CoreInputDeviceTypes::Touch;

    // Set default ink color
    auto dA = inkCanvas->InkPresenter->CopyDefaultDrawingAttributes();
    dA->Color = Windows::UI::Colors::BlueViolet;
    inkCanvas->InkPresenter->UpdateDefaultDrawingAttributes(dA);

    // Events
    inkCanvas->InkPresenter->StrokeInput->StrokeStarted += ref new Windows::Foundation::TypedEventHandler<Windows::UI::Input::Inking::InkStrokeInput ^, Windows::UI::Core::PointerEventArgs ^>(this, &PracticalDirectX12_Tutorial1UWP::MainPage::OnStrokeStarted);
    inkCanvas->InkPresenter->StrokeInput->StrokeEnded += ref new Windows::Foundation::TypedEventHandler<Windows::UI::Input::Inking::InkStrokeInput ^, Windows::UI::Core::PointerEventArgs ^>(this, &PracticalDirectX12_Tutorial1UWP::MainPage::OnStrokeEnded);
}

What we do here is to first define what input our canvas will take. We wish to have mouse support (draw using right mouse button), pen and touch (if you have a touch screen).

Then we set the color of our ink to BlueViolet. We get the drawing attributes of our ink canvas by calling CopyDefaultDrawingAttributes(), make our changes and then store them using UpdateDefaultDrawingAttributes(…).

The last thing we do is to add two lines, one for each of our events. The first is for StrokeStarted and the latter is for StrokeEnded.

4) Event: OnStrokeStarted

This is where it all starts. When our ink canvas detects that we are starting to draw something, we immediately store the point our line is starting from.

Add this code to the bottom of the file:

void PracticalDirectX12_Tutorial1UWP::MainPage::OnStrokeStarted(Windows::UI::Input::Inking::InkStrokeInput ^sender, Windows::UI::Core::PointerEventArgs ^args)
{
    ResetDrawingCanvas();

    if (drawingLineId == 0)
        v1Start = XMFLOAT2(args->CurrentPoint->Position.X, args->CurrentPoint->Position.Y);
    else if (drawingLineId == 1)
        v2Start = XMFLOAT2(args->CurrentPoint->Position.X, args->CurrentPoint->Position.Y);
}

We use drawingLineId to check whether we are drawing line V1 or line V2. We also call ResetDrawingCanvas(), which checks if both lines have already been drawn and, if so, clears everything to make the canvas ready for a new operation.

5) Event: OnStrokeEnded
This is where our line ends. It’s a dirty implementation but for simplicity I leave it like this before doing any refactoring.

Copy this code and paste it in the bottom of the file:

void PracticalDirectX12_Tutorial1UWP::MainPage::OnStrokeEnded(Windows::UI::Input::Inking::InkStrokeInput ^sender, Windows::UI::Core::PointerEventArgs ^args)
{
    if (drawingLineId == 0) {
        v1End = XMFLOAT2(args->CurrentPoint->Position.X, args->CurrentPoint->Position.Y);

        XMVECTOR v1S = XMLoadFloat2(&v1Start);
        XMVECTOR v1E = XMLoadFloat2(&v1End);
        XMVECTOR v1Diff = v1E - v1S;

        XMStoreFloat2(&v1, v1Diff);

        v1Line->X1 = v1Start.x;
        v1Line->Y1 = v1Start.y;

        v1Line->X2 = v1Start.x + v1.x;
        v1Line->Y2 = v1Start.y + v1.y;

        v1Line->Visibility = Windows::UI::Xaml::Visibility::Visible;

        v1Text->Margin = Windows::UI::Xaml::Thickness(v1Start.x + (v1.x / 2.0f) - 20.0f, v1Start.y + v1.y / 2.0f, 0.0f, 0.0f);
        v1Text->Visibility = Windows::UI::Xaml::Visibility::Visible;
    }
    else if (drawingLineId == 1) {
        v2End = XMFLOAT2(args->CurrentPoint->Position.X, args->CurrentPoint->Position.Y);

        XMVECTOR v2S = XMLoadFloat2(&v2Start);
        XMVECTOR v2E = XMLoadFloat2(&v2End);
        XMVECTOR v2Diff = v2E - v2S;

        XMStoreFloat2(&v2, v2Diff);

        v2Line->X1 = v1End.x;
        v2Line->Y1 = v1End.y;
        v2Line->X2 = v1End.x + v2.x;
        v2Line->Y2 = v1End.y + v2.y;

        v2Line->Visibility = Windows::UI::Xaml::Visibility::Visible;

        v2Text->Margin = Windows::UI::Xaml::Thickness(v1End.x + (v2.x / 2.0f) - 20.0f, v1End.y + v2.y / 2.0f, 0.0f, 0.0f);
        v2Text->Visibility = Windows::UI::Xaml::Visibility::Visible;

        // Both lines are drawn, so create V3 and finish the operation
        DrawResultVector();
    }

    drawingLineId++;
}

This code does the same thing based on if we are drawing V1 or V2. It captures the end point of the line and then calculates the vectors itself and stores it in our instance. It also updates the XAML Line controls so they are positioned correctly in our editor, as well as making them visible. It also sets the position of our TextBlocks to halfway down our vectors to identify the vector.

Once done, it will increase drawingLineId so the next line we draw will be V2, or if the current one is V2, we will draw V3 and finalize the vector operation.

If drawingLineId is 0, we know that we are drawing our first line. If it is 1, we know we are drawing the second line. If it is above these, we reset the editor and start with a blank canvas.

6) DrawResultVector()

This function will create the last vector V3, and update the UI just like above for V1 and V2. In addition, it will calculate the angle between V2 and V3 using DirectX Math.

Copy and paste this code to the bottom of the file:

void PracticalDirectX12_Tutorial1UWP::MainPage::DrawResultVector()
{
    XMVECTOR xmV1 = XMLoadFloat2(&v1);
    XMVECTOR xmV2 = XMLoadFloat2(&v2);
    XMVECTOR xmV3 = xmV1 + xmV2;

    XMStoreFloat2(&v3, xmV3);

    v3Line->X1 = v1Start.x;
    v3Line->Y1 = v1Start.y;

    v3Line->X2 = v1Start.x + v3.x;
    v3Line->Y2 = v1Start.y + v3.y;
    v3Line->Visibility = Windows::UI::Xaml::Visibility::Visible;

    v3Text->Margin = Windows::UI::Xaml::Thickness(v1Start.x + (v3.x / 2.0f) - 20.0f, v1Start.y + v3.y / 2.0f, 0.0f, 0.0f);
    v3Text->Visibility = Windows::UI::Xaml::Visibility::Visible;

    XMVECTOR v2angle = XMLoadFloat2(&v2);
    XMVECTOR v3angle = XMLoadFloat2(&v3);
    XMVECTOR angleBetweenV2andV3 = XMVector2AngleBetweenVectors(v2angle, v3angle);
    float aRad = XMVectorGetX(angleBetweenV2andV3);
    float aDeg = XMConvertToDegrees(aRad);
    CalculationsText->Text = "Angle: " + aDeg.ToString();

    anglePath->Visibility = Windows::UI::Xaml::Visibility::Visible;
    PathGeometry^ pg = ref new PathGeometry();
    PathFigure^ pf = ref new PathFigure();
    pf->StartPoint = Point(v2Line->X2 - v3.x / 10.0f, v2Line->Y2 - v3.y / 10.0f);
    ArcSegment^ as = ref new ArcSegment();
    as->Size = Size(10, 10);
    as->RotationAngle = 200.0f;

    // This needs to be set based on whether our triangle is drawn clockwise or counterclockwise
    as->SweepDirection = Windows::UI::Xaml::Media::SweepDirection::Counterclockwise;
    as->Point = Point(v3Line->X2 - v2.x / 10.0f, v3Line->Y2 - v2.y / 10.0f);

    pf->Segments->Append(as);
    pg->Figures->Append(pf);

    anglePath->Data = pg;
}

As mentioned, we first load V1 and V2 to calculate V3, then we update the V3 line in our UI so we can see it.
Once we have all three vectors, we use DirectX Math to calculate the angle. XMVector2AngleBetweenVectors returns a new vector that contains the angle we are looking for (in radians) replicated in each of its components, so it doesn't matter which one we pick. We read the value using XMVectorGetX and convert it from radians to degrees using XMConvertToDegrees.

We also draw an arc between V2 and V3. This currently works only if you draw the vectors in a counterclockwise direction. You can track the vectors to determine the winding and set the sweep direction accordingly so it works both ways, but I leave that as an exercise for you!

However, the arc is simply a path that starts 10% down V2 and ends 10% down V3. The segment takes a size and a rotation angle that control the curve amount. Feel free to play around with these values to learn how it works.

7) ResetDrawingCanvas()

The last part of this example is to implement the function that resets the drawing canvas.

Paste the following code at the bottom of the file:

void PracticalDirectX12_Tutorial1UWP::MainPage::ResetDrawingCanvas()
{
    if (drawingLineId > 1) {
        drawingLineId = 0;

        // Remove the ink strokes from the canvas
        inkCanvas->InkPresenter->StrokeContainer->Clear();

        anglePath->Visibility = Windows::UI::Xaml::Visibility::Collapsed;
        v1Line->Visibility = Windows::UI::Xaml::Visibility::Collapsed;
        v2Line->Visibility = Windows::UI::Xaml::Visibility::Collapsed;
        v3Line->Visibility = Windows::UI::Xaml::Visibility::Collapsed;
        v1Text->Visibility = Windows::UI::Xaml::Visibility::Collapsed;
        v2Text->Visibility = Windows::UI::Xaml::Visibility::Collapsed;
        v3Text->Visibility = Windows::UI::Xaml::Visibility::Collapsed;
    }
}
This simply checks if both lines have been drawn and, if so, clears all the existing ink strokes and hides the lines and text blocks that make up our UI.

8) Build and Run!

Now hit that green play button again to build and deploy, and give your new app a try!



Ok, we covered a lot today but hopefully you now know the basics of the DirectX Math APIs and how to apply them in your own applications.

Moving forward, we will start working with DirectX and Direct3D, and using DirectX Math where needed.

Thanks for reading this!


Download source from here:

Posted in DirectX12, UWP, Windows 10

[Video] DirectX 12: Resources Barriers and You

The DirectX 12 education channel on YouTube is frequently adding new videos to its learning collection. If you haven't checked it out yet and want to understand DirectX 12, you should explore this channel.

Anyhow, I decided to share this video as it’s an important topic for DirectX 12 development. In this video, Bennett walks you through the do’s and don’ts of D3D12 Resource Barrier usage. You’ll learn how to avoid common mistakes, and how to call the resource barrier APIs in a performance-optimal way.

Posted in DirectX, DirectX12