Imran's personal blog

January 27, 2018

Bitcoin for Application Developers

Filed under: blockchain — ipeerbhai @ 10:43 pm


I recently wrote a proof-of-concept application on the Ethereum blockchain.  It was no piece of cake — APIs are changing so fast that nothing worked as documented.  But it was doable, and it did work in a way that made sense.  Here are the high-level concepts:

  1. Set up a testnet
  2. Set up a test client on the testnet
  3. Set up a test miner ( optional )
  4. Author a contract definition
  5. Compile the contract to opcodes
  6. Deploy the contract opcodes to an address
  7. Use a chain aware interaction library to bridge between traditional applications and the deployed smart contract.

These 7 steps are well documented for the Ethereum chain in something close to a developer guide.  But, a few hours of google-fu later, I can barely find the pieces of doing something like this for Bitcoin that aren’t commercial in some way.  Mostly, it seems to be a signal/noise problem — everyone is advertising their solution, and there’s no open, free guide for doing this.  So, I thought I’d take a stab at writing one.


I am teaching myself this as I go.  Literally.  I learn it, type it up here as my personal notes.  That’s why it’s on my personal blog.  Use any of this information at your own risk.  It may be wrong, or very likely — outdated.  There are no guarantees of any sort.  This guide is mostly stream-of-consciousness style writing.  There will be a lot of errors in both language and facts.  I repeat — use at your own risk.

Setup a single machine testnet ( aka regtest )

This used to be really hard, and people made shared testnets.  Now, it’s a flag in Bitcoin Core.  Download Bitcoin Core, and start bitcoind with -regtest like this:

  • mkdir ~/testnet
  • cd ~/testnet
  • bitcoind -regtest -daemon -server -rpcuser=<x> -rpcpassword=<y>

Now, whenever you call bitcoin-cli with -regtest, it’ll use the testnet on your machine.  You can instantly mine blocks by using generate.  You can save the username/password in bitcoin.conf.  To get 50 bitcoin in your single-machine test, you need to mine 101 blocks, since a coinbase reward can’t be spent until it has 100 blocks on top of it.  Like this:

  • bitcoin-cli -regtest generate 101

You can verify you got 50 bitcoins by typing:

  • bitcoin-cli -regtest getbalance

You can now use dumpwallet to get a full wallet list, and listaccounts to show which accounts have how much “btc”.
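As noted above, the RPC username/password can live in bitcoin.conf instead of on the command line.  A minimal sketch ( the file sits in the data directory, ~/.bitcoin by default; the credential values here are placeholders ):

```ini
# bitcoin.conf - run in regtest mode as an RPC server
regtest=1
server=1
daemon=1
rpcuser=foo
rpcpassword=bar
```

With this in place, bitcoind and bitcoin-cli can be started without the -rpcuser/-rpcpassword flags.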

So, to weakly map this to Ethereum:

bitcoind ~= GETH

bitcoin-cli ~= testRPC

Things are going to get a bit sticky here.  Bitcoin’s definition of a contract is very different from Ethereum’s.  Ethereum, having a full Turing-complete state machine VM, can define a contract very broadly.  Bitcoin’s, however, is narrowly defined in Bitcoin Script.  Script really seems involved with validating transactions and transferring coins in a linear way.  People have been using Ethereum to track complex data types and complex functions.  A typical design pattern may be to hold a refund in a contract, then wait for someone to request the refund.  Bitcoin does also have a data payload that you can write to.  There seem to be a few different methods people use to add data to the bitcoin blockchain:

  1. OP_0
  2. OP_1
  3. the {data} field of createrawtransaction.

Right now, I’m not sure of the difference, but I’ve figured out how to do it in NodeJS using the bitcoin-core RPC.  Please note — you’ll need your own RPC server to do this ( see the testnet above! ).  This means that we can begin writing contracts that transfer satoshi, or we can broadcast data on the chain to use as a stupidly expensive string store.

Expensive String Store

Here’s some sample node code to write hello world and burn satoshi.  FYI — a simple error here can burn a lot, and I have almost no error checking.  Don’t do this for real.

const Client = require("bitcoin-core"); // an RPC aware client library.
const web3 = require("web3"); // ethereum's interface library, but has useful funcs.
// need to make sure I have a server set up first — this is a test server I ran to test this script.
const client = new Client({
  network: "regtest",
  username: "foo",
  password: "bar"
});

function quickPrint(error, result) {
  console.log(error || result);
}

function WriteHello() {
  const askedChange = 0.00001000; // the min relay fee set for this node.
  const asciiToWrite = "Hello world.";
  let transactionToUse = {};
  let dataToWrite = {
    data: web3.utils.toHex(asciiToWrite).slice(2) // hex payload with the "0x" stripped
  };
  client.listUnspent() // grab a spendable output to fund the transaction
    .then(result => {
      // update the needed structs and call through to createrawtransaction
      transactionToUse["txid"] = result[0].txid;
      transactionToUse["vout"] = result[0].vout;
      dataToWrite[result[0].address] = result[0].amount - askedChange;
      return client.createRawTransaction([transactionToUse], dataToWrite);
    })
    .then(rawTransactionHex => client.signRawTransaction(rawTransactionHex))
    .then(signedTransaction => client.sendRawTransaction(signedTransaction.hex))
    .then(sentTransactionId => quickPrint(null, sentTransactionId))
    .catch(error => {
      quickPrint(error, null);
    });
}

WriteHello();
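Side note on the encoding: web3.utils.toHex just produces the ASCII bytes as hex with a “0x” prefix.  The same round trip can be checked with Node’s built-in Buffer, no chain required:

```javascript
// Encode "Hello world." to the hex payload used in the data field above,
// then decode it back.  Pure stdlib; nothing here touches bitcoind.
const asciiToWrite = "Hello world.";
const hexPayload = Buffer.from(asciiToWrite, "ascii").toString("hex");
console.log(hexPayload); // 48656c6c6f20776f726c642e

const decoded = Buffer.from(hexPayload, "hex").toString("ascii");
console.log(decoded); // Hello world.
```

Anything written this way lands in a transaction permanently, which is exactly what makes the chain such an expensive string store.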








November 12, 2017

Making Movies!

Filed under: Uncategorized — ipeerbhai @ 9:32 pm

Hi All,

I’ve been playing with Blender for making movies, and I’ve found out a few things about movies, Blender, codecs and the like that I feel like archiving.  This blog post is so I can re-learn how to make movies using my old camcorder, Windows Media Encoder 9, FFMPEG and Blender as of 2017 ( even though some of these tools are much older ).

Funny thing — I actually worked on Windows Media Encoder 9 when I was an MS employee.  I find it amusing that it still works on Windows 10, but it’s a bit of a pain to get it to install, as DirectX is now well above 8.1.

The software packages I used to make videos on Windows 10:

Blender 2.79 — to edit the videos.

Windows Media Encoder 9 — to do screen capture, capture audio from the PC microphone.

FFMPEG — to transcode camcorder codec to something more standard ( otherwise, sound can get out of sync with the video ).

Youtube-dlg — a good youtube downloader.

Installation Instructions.

Blender and FFMPEG are pretty straightforward.  Blender has an installer — run it.  FFMPEG is a zip file, and you just run the ffmpeg.exe in the zip file from the command line.  Put it someplace in your path, or don’t — it’s up to you.  If you don’t know how to use a command line, then this is not for you.

Windows media, on the other hand, is not an easy install on Windows 10.  The installer has multiple bugs, and you have to work around them.  It looks like MS gave up on Windows Media Encoder a long time ago…

To install Windows Media Encoder 9, find your download file and run this at the command line:

mkdir \temp\WMedia

WMEncoder64.exe /C /T:"\temp\WMedia"

This will extract WMEncoder64.msi to \temp\WMedia.

cd \temp\WMedia

msiexec /i WMEncoder64.msi
The installer will now run!  But, it will error out, and look like nothing happened.  Except that Windows Media Encoder 9 is now installed — it just doesn’t look like it.  If you scroll all your apps in the start bar, you’ll find a “Windows Media” folder.  In there, you’ll see “Windows Media Encoder 64 bit Edition” — run that if you want.

Use WMEncoder to record screen videos:

Start WMEncoder, and choose “custom session”.

In the custom session wizard, on the “Sources” tab, select “Screen Capture” as the video source.  I like to turn audio off and overdub that later — but it’s up to you.

Select the “Output” tab, click, “Encode to File” and type in a file name.  Use the “.wmv” file extension.  For example, “C:\temp\MyScreen.wmv”.

Select the “Compression” tab, and set the Destination to “File Archive”.  Select VBR100.  Be careful not to select “File Download(Computer Playback)”, as Windows Media Screen capture codec doesn’t support that destination!  The rest of the tabs are straight forward.  Fill them out and click “apply”.

When ready, select “Start encoding”.  Your screen is now being captured!

To stop encoding, switch to WMEncoder and click the stop button.

Use FFMPEG to transcode all videos to the same format:

Blender can handle different video formats, and can handle different video resolutions!  But that’s a bit of work, and it requires integer frame rates — ideally the same frame rate for every clip.  Some devices don’t record at an integer frame rate, but rather approximate it.  My camcorder uses 29.97 fps, and it ends up creating an audio sync drift of 1 frame for every 600.

The command to have FFMPEG transcode to a typical mp4 at 25 frames per second is this:

ffmpeg.exe -i <input file> -r 25 -c:v libx264 -strict -2 -movflags faststart <output file>


For example:

ffmpeg.exe -i c:\temp\movies\00000.MTS -r 25 -c:v libx264 -strict -2 -movflags faststart c:\temp\movies\woof2.mp4

Use FFMPEG to split the audio out:

ffmpeg -i foo.mp4 -vn -c:a copy audio.format

audio.format means the file format, like sound.mp3.  Opus is a good free codec.

Use Blender to Edit the Movie:

Blender is a huge tool, and frankly, learning to use it is a long task.  Just learning the basics of video editing is maybe 400 minutes of youtube videos.  Here’s a set of quick 5-minute videos that will teach you enough to do a fade in, some jump cuts, do a little bit of “talking head” overlay, and export out a file ready for youtube.


  • Don’t fade in.  Some platforms use frame 1 as your still header.  So, in Internet videos, you don’t fade in.  That’s a “TV Commercial” standard, but we’re not TV…

Good luck!

June 21, 2017

Etsy won’t make you rich

Filed under: Uncategorized — ipeerbhai @ 5:32 pm

I saw this graph online from Earnest, a student loan re-financing company, and loved it.

They studied how much people make at different “gig economy” jobs.  See all those 0% at reasonable wages — like 0% of Uber drivers make $2,000/mo ( $24,000 a year ).  This shows the problem we need to solve as policy.  These companies minimize wages in ways that are unconscionable.  They violate the social contract routinely, and we citizens take advantage because we want cheap cab rides from a guy named Michael.  Almost all are equally bad — the median monthly income on Etsy, for example, is $40, and only 1% make more than $2K.  You can’t live on $40/mo.  Yet I meet people all the time who think, “Well, I’ll make things and sell on Etsy to survive!”  No, you won’t.  You’ll laser engrave a few trinkets and be lucky to make $100 that month.

So, might as well scratch Etsy/Uber off the “survival money” list — you just won’t make it.  If you can afford to own property, AirBnb might work.  If you don’t mind hours, TaskRabbit might work.  Might — even then, you’re unlikely to make it.  America has been paying its service workers too little for too long, because we haven’t been taxing capital enough for a long time.  We could pass legislation to increase worker pay — and should.  We need to start taxing equipment like we do labor, so that we can build the infrastructure of our future.  People pay income tax, which is really a supply tax in disguise.  ( Most people work at a corporation, then pay income taxes on the income they got from the corporation.  If you think about that, it’s really a supply tax. )  If the IRS can tax the labor part of supply, then why not tax the capital side?  Bill Gates recommends this, and it makes sense in our changing economy.

May 15, 2017


Filed under: Uncategorized — ipeerbhai @ 5:56 pm

I run an out of date version of Windows on some of my laptops, with Windows update shut down.  I used to work in the security and anti-virus industries, on advanced threat detection and remediation.  I should be the last person on Earth to say, “You don’t need to patch a properly run Windows system newer than XP — ever.”  But so far, I can make that statement and stick to it, even with wannacry.

So, how do I stay ahead of ransomware?

  1. Shut down most inbound ports via the firewall, uninstall most dangerous services.  Wannacry uses ports 445, 137, 138, and 139.  I long ago stopped the Windows SMB server on my machines, as that’s always been a security hole.  I also long ago uninstalled the SMB service on my home PCs.  It’s great in an AD environment, but who uses AD at home?
  2. 30 day offsite cloud backups from backblaze.  Wonderful service!
    1. This is insurance.  If a ransomware attack did make it through, say via an unpatched 0-day, I can get my stuff back from my offsite backups.
  3. Encrypted personal files.
    1. I believe that all systems are inherently, “public”.  So, I use full disk encryption and password vaults with strong passwords.
    2. I think the LinkedIn hack is a great example — I viewed linkedin as a “low priority” password, and had a variant of that password with some mods against my bitbucket.  LinkedIn leaked a password in the big data breach they had, and hackers got into my BitBucket from a dictionary attack using that password.  Luckily, I didn’t have anything of value on BB, but I now use a unique password on every site, and I use 2 factor auth whenever possible.
  4. Git + Dropbox.  Any file I deem of value is also in a git repo somewhere.  I have 2 of them.  It’s really easy to make a dropbox folder, then add that entire folder to a git repo.  A virus will encrypt your dropbox — oh well.  Just delete the directory and “git pull”.
  5. No linked MSA — even on Windows 10.  This is a bit harder, but I worry–if someone hacks my Microsoft account — say via a shared password on LinkedIn — they can lock me out of my PCs just by changing the MSA password.  Thus, I don’t use an MSA-login on some of my Windows PCs.  This has the benefit of also killing a lot of MS spyware ( like Cortana ).
  6. No IE/Edge.   I use a security-focused browser.
  7. Ad blocker.  The one virus infection I had 10 years ago was from an ad served by a reputable website that exploited an Adobe Acrobat 0-day on my machine while I was in another room.  That forced me to analyze the virus and see what it did.
  8. Registry dumps.  Again, insurance for when I do eventually get hacked. With a registry dump, I can format the machine and import the .reg file.  This means a lot of my software installs remain “working” just by restoring backups.
  9. Run ESET antivirus.  Here’s why:

I know it’s only a matter of time before I’m hacked.  Some 0-day exists out there that I’m vulnerable to.  And, I choose to run Windows instead of Linux ( and let me tell you — as a data scientist, that’s such a pain.  TensorFlow, Python, etc. are just such a pain to get working. ) — so it’s really a matter of time before either I’m hacked, Microsoft is hacked, or one of my web services is hacked.  But so far, knock on wood, the mix of firewall settings, service shutdowns, encryption, backups, and web services allows me to run “unpatched” on some of my systems ( even on public networks ) and remain uninfected.
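Item 1 above can be scripted.  A sketch of closing the Wannacry ports with the built-in Windows firewall, run from an elevated command prompt ( the rule names are just examples, and blocking ports is no substitute for removing the services ):

```bat
:: Block inbound SMB and NetBIOS ( the ports Wannacry uses ).
netsh advfirewall firewall add rule name="Block SMB in" dir=in action=block protocol=TCP localport=139,445
netsh advfirewall firewall add rule name="Block NetBIOS in" dir=in action=block protocol=UDP localport=137,138
```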

By the way — this bothers me.  I have to take so many steps to keep ahead of bad guys, and I know I’ll lose one day.  It’s really just a matter of time.  I wish MS would do a few things:

  1. Unlink Microsoft Accounts(MSAs).  MS does a good job securing their network.  But linked MSAs are a recipe waiting for an exploit.  The bad guys don’t hack the PC — but hack the MSA system, and they’ve hacked, perhaps silently, all PCs using MSA login.
  2. Improve Windows Defender.  It’s just not very good.
  3. SKU-lock away AD policies.  “Windows home” shouldn’t allow group policy to disable command shell, ever — along with a host of anti-virus responses.
  4. Join/Unjoin Windows Update.  Allow old PCs that have turned off patching to rejoin whenever.  Technically, MS does this — but does it really badly.  If you fall behind enough, you can’t ever catch up, as WU will just stop working.
    1. Their answer is to force updates in Windows 10.  But this just pisses off users who don’t want to lose their computer for a day every 6 months or so as new, mostly non-security, patches are installed and existing preferences lost.  Who needs Cortana?  Why reset to Edge every six months?  The current approach is too heavy-handed and self-serving.

These 4 steps would greatly improve security for Windows.  Three of them are relatively easy, and MS could do them in a few months.  Some are hard, but can be done with acquisitions and smart policy.  I don’t know how many “blaster” or “wannacry” outbreaks are needed before MS does the right things to make security better for users without being self-serving.

May 9, 2017

Hypothesis — competing without barriers

Filed under: Uncategorized — ipeerbhai @ 5:14 pm

One of Porter’s 5 forces is the threat of new entrants.

The car rental business has been hemorrhaging cash because, well, anyone can lease you a car.  There’s the big guys — Hertz, Avis, Alamo, etc…  when you need to rent for a day or a week, maybe a month or two.  Then there’s the dealerships, who may short term lease you a car.  Then, there’s “Joe’s car rental”.  I don’t recall the name, but it was a real car rental place in my college town that actually rented to college students.  Old beaters — but the big guys don’t normally rent to people under 26 due to the insurance costs.

The problem car rental places have is this:  If they’re making good profits, then new entrants will follow with lower prices.  This is good — exactly what capitalism is supposed to want.  Remember, according to Adam Smith — profits are a sign of a problem.

But the question — how to compete with this?  You can do an “Uber” for car rentals, but I expect people want access to their own cars, and “Uber” is the best business possible here.

So, you try and compete on branding, efficiency of scale, or network effects.   Branding — “Pick Enterprise, we’ll pick you up.”  Network effects — no idea — maybe the “rent at location A, return at location B” — but network effects usually refer to demand side, not supply side.  Efficiency — “Buy lots of cars cheap, vertically integrate maintenance and fuel”.  There’s a new wrinkle in the efficiency game — efficiency of advertising.  The easiest business to win is to sell more to existing customers.  Cross-sell flights, hotels, and other travel services.  What else could be cross-sold?  Travel insurance?  How about car insurance?  “Dear Avis customer.  We need a lot of insurance on our cars.  We’ve partnered with Great Florida Insurance to create a bulk insurance package that we think could save you money.”  Gas?  But the real thing is to become a supplier of something.   Vehicle entertainment systems?  Burner pre-paid cell phones?

I would love to be a fly on the wall at Hertz’s executive level right now and see what they’re thinking.

April 20, 2017

Learn to be a Programmer!

Filed under: Uncategorized — ipeerbhai @ 5:39 pm

As an experienced Software Developer/Data Scientist/PM/Lead, I sometimes get the question, “How should I learn to program?”

Personally, I think that programmers should know a few different languages, but that comes from experience.  I saw this post on reddit, and I wanted to add a “+1” agreement to it.

In short — anyone who can write at an 8th grade level or better in any human language ( say English, or Chinese ) and can solve this equation:

3x + 1 = 7.  What does x equal?

has the intellectual ability to become a developer in about a year, and a good one in about three, if they follow this simple advice:

  1. Find a set of problems you’d like to solve.
  2. Pick a language/framework with lots of people who have worked on similar problems and have posted their solutions someplace.
  3. Set aside some time.
  4. Get some learning materials.
  5. Start coding.

With some focus, time, and self-forgiveness, you will get there.  The toughest part of programming is that you occasionally hit a “Valley of Despair” where nothing seems to work, and you don’t know why.  Having the emotional grit to get through the valley and the communication skills to find help ( even if it’s knowing how to work Google ) will get you there.

March 22, 2017

Unity Mesh and Materials Notes

Filed under: Uncategorized — ipeerbhai @ 2:13 am

These are my notes on how to make a Mesh with Materials from pure C# code in Unity.
digested from:


Space — A coordinate system to define points.

UV — A 2D space normalized based on the image’s size.  The definition is always UV = (0,0) is the origin and (1,1) is the top right in image space.  To convert an image point from pixels to UV, have U = (chosen pixel x position)/(Image Pixel Width) and V = (chosen pixel y position)/(Image Pixel Height).  A loose idea is that the UV value is the “%” towards the top right of a picture.
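To make the conversion concrete, here it is as a tiny function ( plain JavaScript rather than Unity C#, purely for illustration; the function name is made up ):

```javascript
// U and V are just the pixel position as a fraction of the image size.
function pixelToUV(x, y, imageWidth, imageHeight) {
  return { u: x / imageWidth, v: y / imageHeight };
}

// Pixel (256, 64) in a 512 x 256 image:
const uv = pixelToUV(256, 64, 512, 256);
console.log(uv.u, uv.v); // 0.5 0.25
```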

Steps seem to be:

  1. Generate a GameObject( aka GO ).
  2. Generate Geometry.
    1. Geometry is stored in the mesh property of a MeshFilter component attached to a GameObject.
  3. Generate Texture.
    1. Texture is stored in the material property of a MeshRenderer component attached to a GameObject.

Step 1: Make a gameobject with the needed components:

Two different ways to do this:  method one:

Add this decoration to the class that will be your GO.

[RequireComponent(typeof(MeshFilter), typeof(MeshRenderer))]

Method two — use the API to add at runtime.

		gameObject.AddComponent<MeshFilter>().mesh = mesh;
		gameObject.AddComponent<MeshRenderer>().material = material;


Step 2:  Generate Geometry.

Meshes are made of a handful of ideas.  This means you have a handful of things to figure out to make a mesh:

  1. Where are the vertices in XYZ.
  2. What are the right normals to use per vertex.
    1. Which vertices should be duplicated to handle different faces using them.
  3. What is each vertex’s UV.
  4. What is each vertex’s curvature.
  5. What subfacets should you create to maximize your texture.

The first idea is that meshes are made of triangles.  You add the points in Unity’s world space, then link them into triangles by using the indices of each vertex in an array of Vector3.

NOTE:  You can generate submeshes in a mesh.  You specify the array of vertexes as normal, but instead of directly adding triangles, you set Mesh.subMeshCount to your count of submeshes.  Now, add triangles to each submesh instead of the main mesh.  Example code:
Mesh output = new Mesh();
output.subMeshCount = 2;
output.SetTriangles(m_TriangleLinesA, 0); // submesh indices are zero-based
output.SetTriangles(m_TriangleLinesB, 1);

// if you're not using submeshes, you can just add triangles by:
output.triangles = m_TriangleArray;

One part of generating the geometry is generating the UV axis per vertex.  Here’s a good URL for UV generation on spheres:

For cubes, it’s easier.  There’s yet another method for cylinders.  It seems cube, cylinder, and sphere are the three approximations people use for generating their UV axis information.  You’ll use the vertex and normals to figure out the right axis position for UV.

Step 3: Generate texture.







February 26, 2017

My Unity/Daydream VR notes

Filed under: Uncategorized — ipeerbhai @ 4:13 am


I’m over at the Seattle VR Hackathon sponsored by AT&T over the weekend, and decided to build a Daydream version of our DreamHUD hackathon project.  It quickly became apparent that this wasn’t going to work.  So, instead I decided to try and figure out how to implement Unity Daydream VR in any capacity at all.  I talked to 6 other developers here at the VR Hackathon, and none — I repeat none — got even “hello world” to boot on their Pixel/Daydreams using Unity and the Google VR SDK.  Almost all ended up with a black screen or compile failures.  I’m the only one who got something to build and deploy, but I’ve had no luck in getting and keeping a daydream app up and running with Unity.  I’m hoping my notes help others ( and myself ) get a working daydream VR app in the future.

Example Source Code:

I put the example source code on GitHub as a public project.

You can find the repo with bugs here:

then rebuilt it here with fewer bugs:

Main issues

The main issues in getting Unity + Daydream working are:

  1. Install order seems to matter.  Installing things out of order results in “black screen of death” deployments.
  2. The controller emulator doesn’t work.  With some probing with adb, I was able to, a few weeks later, figure out how to get it to work.  Please see the troubleshooting section at the end of this blog post.
  3. The GvrEventSystem craps out during Play on Windows with the controller emulator.  As in the event pump either crashes the Unity editor, or the events just stop firing.
  4. Deploying to Cardboard results in a black screen.
  5. Poor documentation.  I thought MS was bad at MSDN docs — but they’re heaven compared to Google’s docs.  No example uses of any of their classes.  Even their own demos crash/blackscreen, so we can’t attach breakpoints and debug our way to figure out their APIs.



Start with the Google instructions here:

Here’s a few tricks I learned from failures along the way.

  • Make sure you have Android studio 2.2.3 or newer before you start install of Unity/JDKs.
  • For daydream controller emulator support in the Unity player, you must put ADB.exe in your system path variable.
  • open a cmd or shell window and run “adb devices” before starting Unity.  Unity’s player won’t be able to debug controller issues if you don’t.
  • Make sure you have Unity 5.6 or newer installed.
  • You must install Java SE JDK 1.8 along with Android SDK 24+ for daydream to work.  You can install the Android SDK from android studio, and the JDK from Unity.
    • Android Studio for SDK:  Click The Tools menu –> android –> SDK Manager
    • Unity for JDK: Edit –> Preferences… –> External Tools.  Click the little download button next to the JDK textbox.
  • Import the Google VR SDK unitypackage as the *very first thing* in the project!  Making a scene first, then importing the SDK will cause really hard to debug crashes.
  • On Windows, installing the Unity Tools for Visual Studio really makes script development in C# easier.
  • If you get a lot of controller object errors while running the player, stop the player and restart the scene.
  • Order operations really seems to matter.  Weird crashes and hard to debug issues seem to resolve if you change the order in which you install things or start unity.

After you’ve set the settings from the Google VR Guide, your main camera is now a VR stereo camera.  You can now create a scene.

Editor UI stuff:

On the upper right corner of the scene is a cube with some cones coming out of it.  That’s to control the scene camera in development mode.  Click the cube to enable right-click rotation, and click the cones to “look down” that axis towards the origin.

Questions and Answers:

How do I get a reference to a GameObject from a “Master” script attached to another gameobject?

Example Answer:  Create a GameObject in the Unity editor ( I created a cube, named it “Cube1” ).  In the master script’s Update function, I did this:

void Update () {
var Cube1 = GameObject.Find("Cube1");
}

How do I rotate this cube?

var cubeTransform = Cube1.transform;
cubeTransform.Rotate(Vector3.up, 10f * Time.deltaTime); // Time.deltaTime is a unity-provided static float that represents the time in seconds between calls to the update function.  The parameters are a global-coordinate axis ( Vector3.up is the unit vector [0,1,0] ) and an angle in degrees.

What’s Unity’s coordinate system?

Unity has X moving left/right, Z moving forward back, and Y moving up and down.  This is “Left hand rule” with the thumb as X, the index pointing up as Y, and the middle pointing forward as Z.

What’s the difference between a terrain and a Plane?
Digested from the Unity docs.

Terrains default to 500 x 500 meters in X and Z, with their “lower left” set to (0, 0, 0).  You can deform them, and there’s a default material renderer with property settings that can mimic different types of ground ( like grass, sand, concrete ).  Planes are smaller, can’t deform, and don’t have a default texture.  Here’s a good shot of the inspector for terrain.

How do I make a “grassland with trees” texture onto the terrain?

  1. Import the Standard asset package to get some basic textures.  You can skip this step if you already have the texture you want.
    1. C:\Program Files\Unity 5.6.0b9\Editor\Standard Assets
  2. Select the PaintBrush tool in the terrain inspector.
  3. Click the “Edit Textures” button.
  4. Select “Add Texture”
    1. You can either click the “select” button and pick the asset in a flat list of textures.
    2. You can “drag and drop” the asset icon
    3. I picked, “GrassHillAlbedo.psd”
  5. Add the trees.
    1. Select the tree terrain “brush”.
    2. Click “Edit Trees…”
    3. Click add
      1. Pick one of the standard trees.
      2. Or, you can pick a tree you pre-modeled from the Unity tree modeler.

How do I make a sphere and Bounce it infinitely?
Digested from this video:

  1. In the unity editor:
    1. Make a sphere of diameter = 1 meter, position = 0, 0.5, 0.
    2. Attach a physics rigidbody component to it.
  2. In the MasterScript ( or in the script for the object — I want to keep everything in one master script/gameobject ), type this code in:
  3. // FixedUpdate is called once per physics engine tick, usually before Update
    private void FixedUpdate()
    {
    // update all the forces we want to…
    var mySphere = GameObject.Find("Sphere");
    var theRigidBody = mySphere.GetComponent<Rigidbody>();
    if (mySphere.transform.position.y < 0.51)
    {
    theRigidBody.AddForce(0, 300, 0, ForceMode.Acceleration);
    }
    }

How do I enable Controller Support?
Digested from:

  1. Create an empty GameObject and name it Player.
  2. Set the position of the Player object to (0,1.6,0).
  3. Place the Main Camera underneath the Player object at (0,0,0).
  4. Place GvrControllerPointer underneath the Player object at (0,0,0).
  5. Set the position of the Main Camera to be (0,0,0).
  6. Add GvrViewerMain to the scene, located under GoogleVR/Prefabs.
  7. Add GvrControllerMain to the scene, located under GoogleVR/Prefabs/Controller.
  8. Add GvrEventSystem to the scene, located under GoogleVR/Prefabs/UI.

At the end of this, you’ll have a “laser pointer” on your “right hand” in your app.

How do I know what the DayDream controller is pointing at?

Digested from

There are two ways I’ve found to do this.

Method 1:  Use raycasting.

  1. Get the controller’s position by treating the controller as any gameobject.
    1. GameObject controllerPointer = GameObject.Find(“GvrControllerPointer”);
      Transform controllerTransform = controllerPointer.transform;
      Vector3 pos = controllerTransform.position;
    2. Can also get position in one line of code:
      Vector3 controllerPosition = GameObject.Find(“GvrControllerPointer”).transform.position;
  2. Get the controller’s orientation and create a forward pointing vector from the orientation quaternion.
    1. Vector3 fwd = GvrController.Orientation * Vector3.forward;
  3. Use Physics.Raycast to see what the controller is pointing at.
    1. RaycastHit pointingAtWhat;
    2. Physics.Raycast(pos, fwd, out pointingAtWhat);


Sample code: ( compiled and verified )

void Update ()
{
// find the bouncing sphere from inside this central game object.
var MySphere = GameObject.Find("Sphere"); // Can skip this if the script component is attached to the GameObject that will be the target.
var MySphereTransform = MySphere.transform;

// find the controller and get its position.
var controllerPointer = GameObject.Find("GvrControllerPointer");
var controllerTransform = controllerPointer.transform;

// use the controller orientation quaternion and get a forward pointing vector from it, then raycast.
Vector3 fwd = GvrController.Orientation * Vector3.forward;
RaycastHit pointingAtWhat;
if (Physics.Raycast(controllerTransform.position, fwd, out pointingAtWhat))
{
var theTextGameObject = GameObject.Find("txtMainData");
UnityEngine.UI.Text theTextComponent = theTextGameObject.GetComponent<UnityEngine.UI.Text>();
theTextComponent.text = "hit " +; // the name of whatever the ray hit
}
}

Method 2: Use the Unity event system as modified by Google.

Step 1:  Add the GVREventSystem script to your scene, and add GvrPointerPhysicsRaycaster to your main camera.

Step 2:  Inherit from IGVRPointerHoverHandler on any gameobjects you want to receive a notification that the controller is pointing at like this:

public class myGameObject : MonoBehaviour, IGvrPointerHoverHandler {

    public void OnGvrPointerHover(PointerEventData eventData) {
        // this object is now "hovered" by the controller pointer.  Do logic now.
        // eventData contains a point of intersection between the ray and the object, along with a delta ( magic? )
    }
}

// WARNING -- this breaks in Unity 5.6.b11, but works through 5.6.b10.  Bug?


How do I rotate the camera around the world with the touchpad?

Please note: don’t actually rotate the camera via the touchpad, as it makes people sick fast.  This code is really only useful on the PC/controller emulator for testing input.

// Handling camera rotation with the controller touchpad needs these concepts:
// 1. The touchpad is an X/Y device.  (0,0) is top left.
//      X = 0 means “furthest left touch”.  X = 1 means “furthest right”.
// 2. We need a large “dead zone” around X = 0.5 to prevent jerky movement.
// 3. You cannot rotate the camera directly.  Google’s APIs reset any camera rotations.
//     Instead, put the camera in something, then rotate that something.

void Update()  // cut and pasted, but modified, from working code.
{
    float m_rotationSpeed = 10.0f; // normally a class member, put here for demo purposes.
    float deadZone = 0.15f;
    var player = GameObject.Find("Player"); // the object containing the main camera.
    if (GvrController.IsTouching)
    {
        if (GvrController.TouchPos.x < 0.5f - deadZone)
        {
            // should be rotating left
            player.transform.Rotate(0, -1 * Time.deltaTime * m_rotationSpeed, 0);
        }
        else if (GvrController.TouchPos.x > 0.5f + deadZone)
        {
            // should be rotating right
            player.transform.Rotate(0, 1 * Time.deltaTime * m_rotationSpeed, 0);
        }
    }
}
How do I hit the sphere with a “pool stick” on app button press?

In this scenario, we’re using the controller as a “pool stick” to hit the bouncing sphere and move it when the user pushes the app button.  A few lessons learned:

  1. AppButtonDown is only true for a single frame: the frame when the user pushed the button.  This is a problem with a bouncing sphere, because the user may not be pointing at the bouncing ball during that one frame.  Instead, we’ll use GvrController.AppButton, which stays true the whole time the button is held, and add force as long as the button is down.
  2. GvrController does not expose position, so we have to use the GvrControllerPointer Prefab in Assets/GoogleVR/Prefabs/UI/GvrControllerPointer.prefab attached to a “Player” object.

private void FixedUpdate() // modified from working code.
{
    // find the bouncing sphere from inside this central game object.
    var mySphere = GameObject.Find("Sphere"); // can skip this if the script is attached to the GameObject that will be the target.
    var sphereRigidBody = mySphere.GetComponent<Rigidbody>();

    // find the controller and get its position.
    var controllerPointer = GameObject.Find("GvrControllerPointer");
    var controllerTransform = controllerPointer.transform;

    // use the controller orientation quaternion to get a forward pointing vector, then raycast.
    Vector3 fwd = GvrController.Orientation * Vector3.forward;
    RaycastHit pointingAtWhat;
    if (Physics.Raycast(controllerTransform.position, fwd, out pointingAtWhat))
    {
        if (GvrController.AppButton)
        {
            Vector3 forceToAdd = GvrController.Orientation * Vector3.forward * 100;
            sphereRigidBody.AddForceAtPosition(forceToAdd, pointingAtWhat.point);
        }
    }

    // update all the forces we want to...
    if (mySphere.transform.position.y < 0.51)
    {
        sphereRigidBody.AddForce(0, 300, 0, ForceMode.Acceleration);
    }
}


How do I display text in world view?

Text in Unity is rendered on a canvas.  This is a problem, because the GvrControllerPointer prefab already contains a canvas.  So, if you do Create -> UI -> Text, you’ll bind that text to your controller.  Instead, you have to make a canvas in world space, then adjust the scale from pixels to world units ( aka meters ).

  1. Create a new canvas using Create -> UI -> Canvas.  Give it a name.
  2. Select the canvas in the Scene, then look at the Canvas component in the Inspector.  Change the render mode from “Screen Space - Overlay” to “World Space”.
  3. The canvas is a GameObject, so you can move it as you like.  But, don’t change the size property.  Instead, scale the canvas down to a reasonable size.
  4. With the canvas still selected, do Create -> UI -> Text.  This will put text in the canvas.  Set the color and properties of the text in the Inspector.
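The same setup can also be done from code.  Here’s a minimal sketch, my own untested consolidation of the steps above; the name “WorldCanvas”, the placement, and the 0.001 pixels-to-meters scale are my own choices, not values from Google’s docs:

```csharp
using UnityEngine;
using UnityEngine.UI;

public class WorldTextSetup : MonoBehaviour {
    void Start() {
        // make a canvas GameObject and switch it to world space.
        var canvasGO = new GameObject("WorldCanvas");
        var canvas = canvasGO.AddComponent<Canvas>();
        canvas.renderMode = RenderMode.WorldSpace;

        // place it in front of the camera and scale pixels down to meters.
        canvasGO.transform.position = new Vector3(0f, 1.5f, 2f);
        canvasGO.transform.localScale = Vector3.one * 0.001f;

        // add a text child to the canvas.
        var textGO = new GameObject("txtMainData");
        textGO.transform.SetParent(canvasGO.transform, false);
        var text = textGO.AddComponent<Text>();
        text.font = Resources.GetBuiltinResource<Font>("Arial.ttf");
        text.text = "hello, world space";
    }
}
```

Because this canvas is not under GvrControllerPointer, the text stays put in the world instead of riding along with the controller.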

How do I start recording audio while the user has the app button down?

This turns out to require a new project and updating the Google SDK.  In the old SDK, Microphone permissions couldn’t be acquired, but now existing sample code works.

  1. Add an audio source to the component that is going to record.
  2. Decorate the GameObject source code for the recording component like this:
    1. [RequireComponent(typeof(AudioSource))]
  3. Add a private AudioSource to your GameObject derived class:
    1. private AudioSource microphoneAudioSource = null;
  4. Check for AppButtonDown in your Update function:
    1. if (GvrController.AppButtonDown) { // statements }
  5. Create a ringbuffer and call Microphone.Start like this:
    1. microphoneAudioSource.clip = Microphone.Start(null, true, 10, 16000);
      microphoneAudioSource.loop = true;
  6. Finish recording on AppButtonUp like so:
    1. if (GvrController.AppButtonUp) {
      int recordingPosition = Microphone.GetPosition(null); // do before calling Microphone.End!
      Microphone.End(null);
      }
  7. AudioSource.clip will now contain the recorded AudioClip.
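Put together, the steps above look roughly like this.  It’s a sketch assembled from the fragments above, not verified in the editor, and the class name is mine:

```csharp
using UnityEngine;

[RequireComponent(typeof(AudioSource))]
public class PushToTalkRecorder : MonoBehaviour {
    private AudioSource microphoneAudioSource = null;

    void Start() {
        microphoneAudioSource = GetComponent<AudioSource>();
    }

    void Update() {
        if (GvrController.AppButtonDown) {
            // start recording into a looping 10 second, 16 kHz ring buffer.
            microphoneAudioSource.clip = Microphone.Start(null, true, 10, 16000);
            microphoneAudioSource.loop = true;
        }
        if (GvrController.AppButtonUp) {
            int recordingPosition = Microphone.GetPosition(null); // read before ending!
            Microphone.End(null);
            // microphoneAudioSource.clip now holds the recorded audio.
        }
    }
}
```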



Problem: controller emulator won’t connect in the player.
Solution: Controller emulator device must be the first device that lists in adb devices.  This is a problem, in that some services on the host take port 5555 on up, and adb will see those sometimes.  Try running adb kill-server, then adb devices with your emulator phone attached.

Problem: Can’t install apk packages built on different PCs from same source code.
Solution: you must uninstall the previous package first.  You can use the Android package manager (pm) to find the previously installed package, then run the uninstall command like so:
adb shell pm list packages
adb uninstall <> (aka your old package)

Problem: Can’t get Mic input.
Solution: reinstall the GVR assets and build for non-VR Android target, run without VR, then re-enable Android VR target.  This seems to be caused by a bug in permission management in VR vs stock android.  Once your app has the perms, it keeps them.
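If the mic still fails, it’s also worth checking that the built APK actually requests the permission.  The standard entry in AndroidManifest.xml (stock Android, not GVR-specific) is:

```xml
<uses-permission android:name="android.permission.RECORD_AUDIO" />
```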


December 12, 2016

Town revitalization

Filed under: Uncategorized — ipeerbhai @ 8:11 pm

I’ve been thinking about how economically disadvantaged cities and small communities can revitalize their towns.  That raises the question: what makes a good town?

Do good schools make a good town, or is it backwards — does a good town make good schools?  Do good jobs make a good town, or does a good town make good jobs?

Historically, good towns have sprung up around two forces: natural transportation interfaces and global export capacity.  Natural transportation interfaces are ways to ship things by boat — Ports and rivers.  Global export capacity is just that — making a product that can be sold globally ( computer software, oil, etc… ).

So, imagine that you’re a suburb or small town someplace, whose chief export is your people.   Your town used to make something sold wide and far — but that product’s factory closed or moved away.  Now, the town has surplus productive population for the local demand pool.  What use is a town full of genius-level people that don’t have a way to package their genius into products that can be shipped anyplace?

And, I do believe that most people are genius level.  To prove it: average IQ ( that is, mean IQ ) has risen about 30 points since 1930.  The average person of 1930, scored on today’s scale, would land around 70; the average person today, scored on the 1930 scale, would land around 130.  70 was considered “mentally retarded”, and 130 is considered borderline genius!  How did the average change so much?  The answer is that society asked more of people, and so they rose to the occasion.  This creates a catch-22:  if you don’t ask more of people, they don’t improve.  But, if you ask too much, or test too often and too finely, then they fail to improve.

So, to revitalize a failing town, you need to create a seed where people can ask more of themselves, on their own.  Here’s my hypothesis for a recipe that a government can implement:

  1. Establish a makerspace and course offerings at different levels of abstraction.  It’s not difficulty that’s hard, it’s abstraction.  Driving is very hard; we can barely teach robots to do it now.  But almost everyone can learn it, because it’s very concrete.  Conversely, calculus is often hard to learn, even though it’s straightforward ( in fact, a symbolic equation solver was one of the first AI demonstrations, showing that calculus problems can be solved with only a handful of steps put into the correct sequence ).  The difference?  Abstraction.  Driving is concrete, with simple words and simple controls.  Calculus is abstract, with complex words and a search tree of possible steps.
  2. Establish a business incubator near, but not in, the Makerspace.  Incubators are essentially tables, meeting spaces, nap spaces, coffee, Internet, entertainment, and art.  You should be able  to get a quick nap at one.  You should be able to work securely at one, without fear that your laptop will get stolen.  You should be able to feel inspired.  It has to be quiet — no loud machines.  Think Starbucks — but you don’t feel bad if you’re there all day.  Subsidize coffee, drinks, and meals.  Allow businesses to carve out private spaces of the public space ( with rent payments, of course ). Allow individuals to work there, with token payments to keep out the homeless/shiftless.
  3. Establish a few different grant types.  Grant type 1 is the “search” grant: about $20k to an individual.  People apply to the grant and show effort.  Success is not the basis for awarding the grant; effort is.  Find metrics that are hard to game and that show effort.  A good example is a letter of reference from someone respected in the community.  You’ll need to give away about 10-20 of these grants a year.  The second type is a seed grant: $40-60k, given for traction.  That is, either sales/sales growth, or user growth.  You’ll need to give away 3 of these a year.
  4. Market the heck out of all three.  Have schools do field trips to the makerspace.  Have meetups hosted in the incubator, for free.  Give away pamphlets about the grant program.  You’ll need to get about 400 people into the grant pipeline every year, award search grants to 10-20 of them a year, and award traction grants to 3.

This is all super cheap for a community, especially one that still has buildings around and some tax base left.  We’re talking 100K for the makerspace up front, with maybe 60K/yr in operating expense.  The incubator is likely even less.  The grants are the most expensive thing, and we’re talking 400K in “search” grants and 200K in traction grants.  The grants should be secured grants, secured by equity/equity options.

If you do this, at less than a million dollars a year over 5 years, you’ll develop new businesses and change the dynamic of your town.  This process is the seed around which small businesses will form.  Some of those will grow, perhaps quite large.  It’s the formula VCs use, but are not fully cognizant of.  If your town has been in decline for a long time, then you may need to start without the grants.  Those grants keep people afloat while they start a business; without them, you’ll get higher drop-out rates, but you’ll eventually find success.  And success means job growth!

November 21, 2016

Steps to Install TensorFlow with GPU on Windows

Filed under: Uncategorized — ipeerbhai @ 1:11 am

I normally use Encog and a self-written learning framework when I do audio pipeline learning.  I’ve been tempted by CNTK and TensorFlow.  CNTK uses tools whose license is, sadly, too restrictive.  TensorFlow’s ecosystem is more in line with what I need.

I’m a Windows guy, and I can use TensorFlow (TF) via Docker.  But, I want to use my GPU.  I have a CUDA-compliant GPU on one of my machines, along with Windows 10 and Visual Studio Community.  The official readme is designed for VS Pro, not Community.  The key difference is that VS Community doesn’t officially support TensorFlow 32-bit with CUDA, only 64-bit.

Here are the steps I’ve figured out so far:


You’ll need SWIG, CUDA, the NVIDIA cuDNN library for CUDA, Git, CMake, Python 3.5, and numpy 1.11.  You can use Anaconda to satisfy the Python/numpy requirement.  Install Anaconda, then conda install numpy in an elevated command prompt.  The rest, you’ll have to download installers and install.  Oh, and Visual Studio Community 2015.  I’ll assume a default install drive of C:.  I’ve adapted the steps from the official GitHub here:

You’ll want to read that first, as the changes are pretty minor.

Also, if you have python 2.7, then remove Python 2.7 from your path — it’ll interfere with CMAKE.


  1. launch a CMD window and setup the environment.
    1. Put all the above pre-reqs in your path environment variable, except for Visual Studio.
    2. run "C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\bin\amd64\vcvars64.bat"  ( This is changed from official docs )
    3. Put CMake in your path: set "PATH=%PATH%;C:\Program Files\CMake\bin" ( different than docs )
  2. Git pull TensorFlow, change into the CMake build dir, then run this CMake invocation:
    1. cd /d "C:\temp\tensorflow\tensorflow\contrib\cmake\build"
    2. cmake .. -A x64 -DCMAKE_BUILD_TYPE=Release -DSWIG_EXECUTABLE="C:/tools/swig/swig.exe" -DPYTHON_EXECUTABLE=C:/Users/%USERNAME%/AppData/Local/Continuum/Anaconda3/python.exe -DPYTHON_LIBRARIES=C:/Users/%USERNAME%/AppData/Local/Continuum/Anaconda3/libs/python35.lib -DPYTHON_INCLUDE_DIR="C:/Program Files/Anaconda3/include" -DNUMPY_INCLUDE_DIR="C:/Program Files/Anaconda3/lib/site-packages/numpy/core/include" -Dtensorflow_ENABLE_GPU=ON -DCUDNN_HOME="C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v8.0"
  3. In theory, you can MSBuild the resulting .vcxproj files from step 2.  I’ve found some build breaks, so I’ll update this post when I’ve figured them out.  Here’s the list so far:
    1. Severity Code Description Project File Line Suppression State
      Error C1083 Cannot open include file: ‘tensorflow/cc/ops/image_ops.h’: No such file or directory tf_label_image_example c:\temp\tensorflow\tensorflow\examples\label_image\ 38
    2. Severity Code Description Project File Line Suppression State
      Error C1083 Cannot open include file: ‘tensorflow/cc/ops/array_ops.h’: No such file or directory tf_tutorials_example_trainer c:\temp\tensorflow\tensorflow\cc\ops\standard_ops.h 19
    3. Severity Code Description Project File Line Suppression State
      Error LNK1104 cannot open file ‘Debug\tf_core_gpu_kernels.lib’ grpc_tensorflow_server C:\temp\tensorflow\tensorflow\contrib\cmake\build\LINK 1
    4. Severity Code Description Project File Line Suppression State
      Error LNK1104 cannot open file ‘Debug\tf_core_gpu_kernels.lib’ pywrap_tensorflow C:\temp\tensorflow\tensorflow\contrib\cmake\build\LINK 1

