Imran's personal blog

March 22, 2017

Unity Mesh and Materials Notes

Filed under: Uncategorized — ipeerbhai @ 2:13 am

These are my notes on how to make a Mesh with Materials from pure C# code in Unity.
digested from:
http://catlikecoding.com/unity/tutorials/constructing-a-fractal/
http://catlikecoding.com/unity/tutorials/procedural-grid/
http://catlikecoding.com/unity/tutorials/rounded-cube/

Definitions:

Space — A coordinate system used to define points.

UV — A 2D space normalized to the image’s size.  By definition, UV = (0,0) is the origin (bottom left) and (1,1) is the top right in image space.  To convert an image point from pixels to UV, use U = (chosen pixel x position)/(image pixel width) and V = (chosen pixel y position)/(image pixel height).  A loose way to think of it: the UV value is the “%” of the way towards the top right of a picture.
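
For example, here is a tiny helper ( my own sketch, not from the tutorials above ) that does the pixel-to-UV conversion, assuming the image is a Unity Texture2D; it can live in any MonoBehaviour:

// Converts a pixel position in an image to UV space.
// (0,0) is the bottom-left corner, (1,1) is the top-right corner.
Vector2 PixelToUV(Texture2D image, float pixelX, float pixelY)
{
    float u = pixelX / image.width;   // fraction of the way across the image
    float v = pixelY / image.height;  // fraction of the way up the image
    return new Vector2(u, v);
}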

Steps seem to be:

  1. Generate a GameObject( aka GO ).
  2. Generate Geometry.
    1. Geometry is stored in the mesh property of a MeshFilter component attached to a GameObject.
  3. Generate Texture.
    1. Texture is stored in the material property of a MeshRenderer component attached to a GameObject.

Step 1: Make a gameobject with the needed components:

There are two different ways to do this.  Method one:

Add this attribute to the class that will be your GO.

[RequireComponent(typeof(MeshFilter), typeof(MeshRenderer))]

Method two — use the API to add at runtime.

		gameObject.AddComponent<MeshFilter>().mesh = mesh;
		gameObject.AddComponent<MeshRenderer>().material = material;


Step 2:  Generate Geometry.

Meshes are built from a handful of ideas.  The first is that meshes are made of triangles: a vertex array, an index array that groups those vertices into triangles, and an optional UV array that maps each vertex into texture space.
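
Until I finish these notes, here’s a minimal sketch in the spirit of the procedural grid tutorial linked above: it builds a one-quad mesh out of two triangles, gives it UVs, and feeds the result into the MeshFilter and MeshRenderer from step 1.  The class name and the public material field are my own placeholders.

using UnityEngine;

[RequireComponent(typeof(MeshFilter), typeof(MeshRenderer))]
public class QuadMesh : MonoBehaviour
{
    public Material material; // assign any material in the inspector

    void Start()
    {
        var mesh = new Mesh();
        mesh.name = "Procedural Quad";

        // Four corners of a 1 x 1 quad in the XY plane.
        mesh.vertices = new Vector3[] {
            new Vector3(0, 0, 0), new Vector3(1, 0, 0),
            new Vector3(0, 1, 0), new Vector3(1, 1, 0)
        };

        // Two triangles built from those corners; the winding order controls
        // which side of the quad gets rendered.
        mesh.triangles = new int[] { 0, 2, 1, 1, 2, 3 };

        // UVs match the corners: (0,0) bottom left, (1,1) top right.
        mesh.uv = new Vector2[] {
            new Vector2(0, 0), new Vector2(1, 0),
            new Vector2(0, 1), new Vector2(1, 1)
        };

        mesh.RecalculateNormals();

        // Step 2: geometry goes into the MeshFilter.
        GetComponent<MeshFilter>().mesh = mesh;
        // Step 3: the texture/material goes onto the MeshRenderer.
        GetComponent<MeshRenderer>().material = material;
    }
}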


February 26, 2017

My Unity/Daydream VR notes

Filed under: Uncategorized — ipeerbhai @ 4:13 am

Background

I’m over at the Seattle VR Hackathon sponsored by AT&T over the weekend, and decided to build a Daydream version of our DreamHUD hackathon project.  It quickly became apparent that this wasn’t going to work.  So instead, I decided to try to figure out how to implement Unity Daydream VR in any capacity at all.  I talked to 6 other developers here at the VR Hackathon, and none — I repeat, none — got even “hello world” to boot on their Pixel/Daydreams using Unity and the Google VR SDK.  Almost all ended up with a black screen or compile failures.  I’m the only one who got something to build and deploy, but I’ve had no luck in getting and keeping a Daydream app up and running with Unity.  I’m hoping my notes help others ( and myself ) get a working Daydream VR app in the future.

Example Source Code:

I put the example source code on GitHub as a public project.

You can find the repo with bugs here:

https://github.com/ipeerbhai/DayDream101

then rebuilt it here with fewer bugs:

https://github.com/ipeerbhai/UnityAndroid101

Main issues

The main issues in getting Unity + Daydream working are:

  1. Install order seems to matter.  Installing things out of order results in “black screen of death” deployments.
  2. The controller emulator doesn’t work out of the box.  A few weeks later, with some adb probing, I figured out how to get it working.  Please see the troubleshooting section at the end of this blog post.
  3. The GvrEventSystem craps out during Play on Windows with the controller emulator: the event pump either crashes the Unity editor, or the events just stop firing.
  4. Deploying to Cardboard results in a black screen.
  5. Poor documentation.  I thought MS was bad at MSDN docs — but they’re heaven compared to Google’s docs.  No example uses of any of their classes.  Even their own demos crash/blackscreen, so we can’t attach breakpoints and debug our way to figure out their APIs.

Notes

Installation:

Start with the Google instructions here:

https://developers.google.com/vr/unity/get-started

Here are a few tricks I learned from failures along the way.

  • Make sure you have Android Studio 2.2.3 or newer before you start installing Unity and the JDKs.
  • For daydream controller emulator support in the Unity player, you must put ADB.exe in your system path variable.
  • Open a cmd or shell window and run “adb devices” before starting Unity.  Unity’s player won’t be able to debug controller issues if you don’t.
  • Make sure you have Unity 5.6 or newer installed.
  • You must install Java SE JDK 1.8 along with Android SDK 24+ for daydream to work.  You can install the Android SDK from android studio, and the JDK from Unity.
    • Android Studio for SDK:  Click The Tools menu –> android –> SDK Manager
    • Unity for JDK: Edit –> Preferences… –> External Tools.  Click the little download button next to the JDK textbox.
  • Import the Google VR SDK unitypackage as the *very first thing* in the project!  Making a scene first, then importing the SDK will cause really hard to debug crashes.
  • On Windows, installing the Unity Tools for Visual Studio really makes script development in C# easier.
  • If you get a lot of controller object errors while running the player, stop the player and restart the scene.
  • Order of operations really seems to matter.  Weird crashes and hard-to-debug issues seem to resolve if you change the order in which you install things or start Unity.

After you’ve set the settings from the Google VR Guide, your main camera is now a VR stereo camera.  You can now create a scene.

Editor UI stuff:

On the upper right corner of the scene is a cube with some cones coming out of it.  That’s to control the scene camera in development mode.  Click the cube to enable right-click rotation, and click the cones to “look down” that axis towards the origin.

Questions and Answers:

How do I get a reference to a GameObject from a “Master” script attached to another gameobject?

Example Answer:  Create a GameObject in the Unity editor ( I created a cube, named it “Cube1” ).  In the master script’s Update function, I did this:

void Update () {
    var Cube1 = GameObject.Find("Cube1");
}

How do I rotate this cube?

var cubeTransform = Cube1.transform;
cubeTransform.Rotate(Vector3.up, 10f * Time.deltaTime); // Time.deltaTime is a Unity-provided static float that represents the time in seconds between calls to the Update function.  The parameters are an axis ( Vector3.up is the unit vector [0,1,0] ), interpreted in the object's local space by default, and an angle in degrees.

What’s Unity’s coordinate system?

Unity has X moving left/right, Z moving forward/back, and Y moving up and down.  This is the “left hand rule”, with the thumb as X, the index finger pointing up as Y, and the middle finger pointing forward as Z.

What’s the difference between a terrain and a Plane?
Digested from https://www.youtube.com/watch?v=Oc3odBj-jFA, and unity docs.

Terrains default to 500 x 500 meters in X and Z, with their “lower left” set to (0, 0, 0).  You can deform them, and there’s a default material renderer with property settings that can mimic different types of ground ( like grass, sand, or concrete ).  Planes are smaller, can’t deform, and don’t have a default texture.  Here’s a good shot of the inspector for terrain.

How do I make a “grassland with trees” texture onto the terrain?

  1. Import the Standard asset package to get some basic textures.  You can skip this step if you already have the texture you want.
    1. C:\Program Files\Unity 5.6.0b9\Editor\Standard Assets
  2. Select the PaintBrush tool in the terrain inspector.
  3. Click the “Edit Textures” button.
  4. Select “Add Texture”
    1. You can either click the “select” button and pick the asset from a flat list of textures,
    2. or you can “drag and drop” the asset icon.
    3. I picked “GrassHillAlbedo.psd”.
  5. Add the trees.
    1. Select the tree terrain “brush”.
    2. Click “Edit Trees…”
    3. Click add
      1. Pick one of the standard trees.
      2. Or, you can pick a tree you pre-modeled from the Unity tree modeler.

How do I make a sphere and Bounce it infinitely?
Digested from this video:
https://unity3d.com/learn/tutorials/projects/roll-ball-tutorial/moving-player?playlist=17141

  1. In the unity editor:
    1. Make a sphere with a diameter of 1 meter ( the default ), position = (0, 0.5, 0).
    2. Attach a physics rigidbody component to it.
  2. In the MasterScript ( or in the script for the object — I want to keep everything in one master script/gameobject ), type this code in:
  3. // FixedUpdate is called once per physics step, before the physics engine runs.
    private void FixedUpdate()
    {
        // update all the forces we want to…
        var mySphere = GameObject.Find("Sphere");
        var theRigidBody = mySphere.GetComponent<Rigidbody>();
        if (mySphere.transform.position.y < 0.51)
        {
            theRigidBody.AddForce(0, 300, 0, ForceMode.Acceleration);
        }
    }

How do I enable Controller Support?
Digested from:
https://developers.google.com/vr/unity/controller-support

  1. Create an empty GameObject and name it Player.
  2. Set the position of the Player object to (0,1.6,0).
  3. Place the Main Camera underneath the Player object at (0,0,0).
  4. Place GvrControllerPointer underneath the Player object at (0,0,0).
  5. Set the position of the Main Camera to be (0,0,0).
  6. Add GvrViewerMain to the scene, located under GoogleVR/Prefabs.
  7. Add GvrControllerMain to the scene, located under GoogleVR/Prefabs/Controller.
  8. Add GvrEventSystem to the scene, located under GoogleVR/Prefabs/UI.

At the end of this, you’ll have a “laser pointer” on your “right hand” in your app.

How do I know what the DayDream controller is pointing at?

Digested from https://www.youtube.com/watch?v=l9OfmWnqR0M

There are two ways I’ve found to do this.

Method 1:  Use raycasting.

  1. Get the controller’s position by treating the controller like any other GameObject.
    1. GameObject controllerPointer = GameObject.Find("GvrControllerPointer");
      Transform controllerTransform = controllerPointer.transform;
      Vector3 pos = controllerTransform.position;
    2. You can also get the position in one line of code:
      Vector3 controllerPosition = GameObject.Find("GvrControllerPointer").transform.position;
  2. Get the controller’s orientation and create a forward-pointing vector from the orientation quaternion.
    1. Vector3 fwd = GvrController.Orientation * Vector3.forward;
  3. Use Physics.Raycast to see what the controller is pointing at.
    1. RaycastHit pointingAtWhat;
    2. Physics.Raycast(pos, fwd, out pointingAtWhat);

 

Sample code: ( compiled and verified )

void Update ()
{
    // find the bouncing sphere from inside this central game object.
    var MySphere = GameObject.Find("Sphere"); // Can skip this lookup if this script component is attached to the target GameObject.
    var MySphereTransform = MySphere.transform;

    // find the controller and get its position.
    var controllerPointer = GameObject.Find("GvrControllerPointer");
    var controllerTransform = controllerPointer.transform;

    // use the controller orientation quaternion to get a forward-pointing vector, then raycast.
    Vector3 fwd = GvrController.Orientation * Vector3.forward;
    RaycastHit pointingAtWhat;
    if (Physics.Raycast(controllerTransform.position, fwd, out pointingAtWhat))
    {
        var theTextGameObject = GameObject.Find("txtMainData");
        UnityEngine.UI.Text theTextComponent = theTextGameObject.GetComponent<UnityEngine.UI.Text>();
        theTextComponent.text = "hit " + pointingAtWhat.collider.name;
    }
}

Method 2: Use the Unity event system as modified by Google.

Step 1:  Add the GvrEventSystem prefab to your scene, and add GvrPointerPhysicsRaycaster to your main camera.

Step 2:  Implement IGvrPointerHoverHandler on any GameObject that should be notified when the controller points at it, like this:

public class myGameObject : MonoBehaviour, IGvrPointerHoverHandler
{
    public void OnGvrPointerHover(PointerEventData eventData)
    {
        // myThing is now "hovered" by the controller pointer.  Do logic now.
        // eventData contains the point of intersection between the ray and the object, along with a delta ( magic? )
    }

    // WARNING — this breaks in Unity 5.6.b11, but works through 5.6.b10.  Bug?
}

How do I rotate the camera around the world with the touchpad?

Please note — don’t actually rotate the camera via the touchpad — it makes people sick fast.  This code is really only useful on the PC/controller emulator to test input.

// Handling camera rotation with the controller touchpad needs these concepts:
// 1. The touchpad is an X/Y device.  (0,0) is top left.
//      X = 0 means "furthest left touch".  X = 1 means "furthest right".
// 2. We need a large "dead zone" around X = 0.5 to prevent jerky movement.
// 3. You cannot rotate the camera directly.  Google's APIs reset any camera rotations.
//     Instead, put the camera in something, then rotate that something.

void Update()  // cut and paste, but modified, from working code.
{
    float m_rotationSpeed = 10.0f; // normally a class member, put here for demo purposes.
    float deadZone = 0.15f;
    var player = GameObject.Find("Player"); // the object containing the main camera.
    if (GvrController.IsTouching)
    {
        if (GvrController.TouchPos.x < 0.5f - deadZone)
        {
            // Should be rotating left
            player.transform.Rotate(0, -1 * Time.deltaTime * m_rotationSpeed, 0);
        }
        else if (GvrController.TouchPos.x > 0.5f + deadZone)
        {
            // Should be rotating right
            player.transform.Rotate(0, 1 * Time.deltaTime * m_rotationSpeed, 0);
        }
    }
}

How do I hit the sphere with a “pool stick” on app button press?

In this scenario, we’re using the controller as a “pool stick” to hit the bouncing sphere and move it when the user pushes the app button.  Some learnings.

  1. AppButtonDown is only true for one frame — the frame when the user pushed the button.  This is a problem with a bouncing sphere, because the user may not have hit the bouncing ball when pushing the button.  Instead, we’ll use GvrController.AppButton, which stays true for as long as the button is held, and add force while the button is down.
  2. GvrController does not expose position, so we have to use the GvrControllerPointer Prefab in Assets/GoogleVR/Prefabs/UI/GvrControllerPointer.prefab attached to a “Player” object.

private void FixedUpdate() // modified from working code.
{
    // find the bouncing sphere from inside this central game object.
    var mySphere = GameObject.Find("Sphere"); // Can skip this lookup if this script component is attached to the target GameObject.
    var sphereRigidBody = mySphere.GetComponent<Rigidbody>();
    var MySphereTransform = mySphere.transform;

    // find the controller and get its position.
    var controllerPointer = GameObject.Find("GvrControllerPointer");
    var controllerTransform = controllerPointer.transform;

    // use the controller orientation quaternion to get a forward-pointing vector, then raycast.
    Vector3 fwd = GvrController.Orientation * Vector3.forward;
    RaycastHit pointingAtWhat;
    if (Physics.Raycast(controllerTransform.position, fwd, out pointingAtWhat))
    {
        if (GvrController.AppButton)
        {
            // push the sphere away from the controller, at the point the ray hit it.
            Vector3 forceToAdd = GvrController.Orientation * Vector3.forward * 100;
            sphereRigidBody.AddForceAtPosition(forceToAdd, pointingAtWhat.point);
        }
    }

    // update all the forces we want to…
    if (mySphere.transform.position.y < 0.51)
    {
        sphereRigidBody.AddForce(0, 300, 0, ForceMode.Acceleration);
    }
}

 

How do I display text in world view?

Digested from:
https://blogs.unity3d.com/2014/06/30/unity-4-6-new-ui-world-space-canvas/

Text in Unity is rendered on a canvas.  This is a problem, because the GvrControllerPointer prefab already includes a canvas.  So, if you do Create –> UI –> Text, you’ll bind that text to the controller’s canvas.  Instead, you have to make a canvas in world space, then adjust the scale from pixels to world units ( aka meters ).  A script version of the same setup is sketched after the list below.

  1. Create a new canvas using Create –> UI –>Canvas.  Give it a name.
  2. Select the Canvas In the Scene, then look at the Canvas Component in the Inspector.  Change the “Screen space overlay” to “World space”.
  3. The canvas is a gameobject — you can move it like you want.  But, don’t change the size property.  Instead, scale the canvas down to a reasonable size.
  4. With the canvas still selected, do create –> UI –> Text.  This will put text in the canvas.  Select the color and properties of the text in the inspector.
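
If you’d rather do the canvas setup from a script, here’s a minimal sketch.  The "WorldCanvas" name and the class name are my own placeholders; the Canvas/RenderMode calls are standard Unity UI.

using UnityEngine;

public class WorldCanvasSetup : MonoBehaviour
{
    void Start()
    {
        // "WorldCanvas" is a placeholder name; use whatever you named your canvas.
        var canvasObject = GameObject.Find("WorldCanvas");
        var canvas = canvasObject.GetComponent<Canvas>();
        canvas.renderMode = RenderMode.WorldSpace;

        // Put the canvas in front of the player and scale it from pixel units
        // toward meters ( 0.001 maps 1000 canvas pixels to 1 world meter ).
        canvasObject.transform.position = new Vector3(0, 1.6f, 2.0f);
        canvasObject.transform.localScale = Vector3.one * 0.001f;
    }
}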

How do I start recording audio while the user has the app button down?

This turns out to require a new project and updating the Google SDK.  In the old SDK, Microphone permissions couldn’t be acquired, but now existing sample code works.

  1. Add an audio source to the component that is going to record.
  2. Decorate the GameObject source code for the recording component like this:
    1. [RequireComponent(typeof(AudioSource))]
  3. Add a private AudioSource to your GameObject derived class:
    1. private AudioSource microphoneAudioSource = null;
  4. Check for AppButtonDown in your Update function:
    1. if (GvrController.AppButtonDown) { // statements }
  5. Create a ringbuffer and call Microphone.Start like this:
    1. microphoneAudioSource.clip = Microphone.Start(null, true, 10, 16000);
      microphoneAudioSource.loop = true;
  6. Finish recording on AppButtonUp like so:
    1. if (GvrController.AppButtonUp) {
      int recordingPosition = Microphone.GetPosition(null); // do before calling end!
      Microphone.End(null);
      }
  7. AudioSource.clip will now contain the AudioClip.
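
Putting the steps above together, here’s a minimal sketch of the recording component.  The class name is my own placeholder; the GvrController properties are the same ones used elsewhere in these notes.

using UnityEngine;

[RequireComponent(typeof(AudioSource))]
public class AppButtonRecorder : MonoBehaviour
{
    private AudioSource microphoneAudioSource = null;
    private int recordingPosition = 0;

    void Start()
    {
        microphoneAudioSource = GetComponent<AudioSource>();
    }

    void Update()
    {
        if (GvrController.AppButtonDown)
        {
            // 10-second looping ring buffer at 16 kHz from the default microphone.
            microphoneAudioSource.clip = Microphone.Start(null, true, 10, 16000);
            microphoneAudioSource.loop = true;
        }

        if (GvrController.AppButtonUp)
        {
            recordingPosition = Microphone.GetPosition(null); // do this before calling End!
            Microphone.End(null);
            // microphoneAudioSource.clip now contains the recorded AudioClip.
        }
    }
}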

 

Troubleshooting:

Problem: controller emulator won’t connect in the player.
Solution: The controller emulator device must be the first device listed in adb devices.  This is a problem because some services on the host take port 5555 and up, and adb will sometimes see those.  Try running adb kill-server, then adb devices, with your emulator phone attached.

Problem: Can’t install apk packages built on different PCs from same source code.
Solution: you must uninstall the previous package first.  You can use the Android package manager (pm) to find the previously installed package, then run the uninstall command like so:
adb shell pm list packages
adb uninstall <com.xxx.yyy> (aka your old package)

Problem: Can’t get Mic input.
Solution: reinstall the GVR assets, build for a non-VR Android target, run without VR, then re-enable the Android VR target.  This seems to be caused by a bug in permission management in VR vs. stock Android.  Once your app has the permissions, it keeps them.

 

December 12, 2016

Town revitalization

Filed under: Uncategorized — ipeerbhai @ 8:11 pm

I’ve been thinking how economically disadvantaged cities and small communities can revitalize their towns.  This begs the question — what makes a good town?

Do good schools make a good town, or is it backwards — does a good town make good schools?  Do good jobs make a good town, or does a good town make good jobs?

Historically, good towns have sprung up around two forces: natural transportation interfaces and global export capacity.  Natural transportation interfaces are ways to ship things by boat — Ports and rivers.  Global export capacity is just that — making a product that can be sold globally ( computer software, oil, etc… ).

So, imagine that you’re a suburb or small town someplace, whose chief export is your people.   Your town used to make something sold wide and far — but that product’s factory closed or moved away.  Now, the town has surplus productive population for the local demand pool.  What use is a town full of genius-level people that don’t have a way to package their genius into products that can be shipped anyplace?

And, I do believe that most people are genius level.  To prove it: https://en.wikipedia.org/wiki/Flynn_effect — average IQ ( that is, mean measured performance ) has increased by about 30 points since 1930.  On today’s scale, the average person of 1930 would score around 70; on the 1930 scale, a “normal” person today would score around 130.  70 was considered “mentally retarded”, and 130 is considered borderline genius!  How did the average change so much?  The answer is that society asked more of people, and so they rose to the occasion.  This creates a catch-22:  if you don’t ask more of people, they don’t improve.  But if you ask too much — test too often or too finely — then they fail to improve.

So, to revitalize a failing town, you need to create a seed where people can ask more of themselves, but on their own.  Here’s my hypothesis for a recipe that a government can implement:

  1. Establish a makerspace and course offerings at different levels of abstraction.  It’s not difficulty that’s hard, it’s abstraction.  Driving is very hard — we can barely teach robots to do it now.  But almost everyone can learn it — it’s very concrete.  Conversely, calculus is often hard to learn, even though it’s straightforward ( in fact, a symbolic calculus solver was one of the earliest AI programs, showing that these problems can be solved with only a handful of steps put into the correct sequence: https://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-034-artificial-intelligence-fall-2010/lecture-videos/lecture-2-reasoning-goal-trees-and-problem-solving/ )  The difference?  Abstraction.  Driving is concrete, with simple words and simple controls.  Calculus is abstract, with complex words and a search tree of possible steps.
  2. Establish a business incubator near, but not in, the Makerspace.  Incubators are essentially tables, meeting spaces, nap spaces, coffee, Internet, entertainment, and art.  You should be able  to get a quick nap at one.  You should be able to work securely at one, without fear that your laptop will get stolen.  You should be able to feel inspired.  It has to be quiet — no loud machines.  Think Starbucks — but you don’t feel bad if you’re there all day.  Subsidize coffee, drinks, and meals.  Allow businesses to carve out private spaces of the public space ( with rent payments, of course ). Allow individuals to work there, with token payments to keep out the homeless/shiftless.
  3. Establish a few different grant types.  Grant type 1 — the “search” grant.  This is to an individual, about $20k.  People apply to the grant, and show effort.  Success is not the way to award the grant — effort is — find metrics that are hard to game, that show effort.  A good example is a letter of reference from someone respected in the community.  You’ll need to give away about 10-20 of these grants a year.  A second type of grant is a seed grant.  40-60k, given for traction.  That is, either sales/sales growth, or user growth.  You’ll need to give away 3 of these a year.
  4. Market the heck out of all three.  Have schools do field trips to the makerspace.  Have meetups hosted in the incubator, for free.  Give away pamphlets about the grant program.  You’ll need to get about 400 people into the grant pipeline every year, award search grants to 10-20 of them a year, and award traction grants to 3.

This is all super cheap for a community, especially one that has buildings around and some tax base left.  We’re talking $100K for the makerspace up front, with maybe $60K/yr in operating expense.  The incubator is likely even less.  The grants are the most expensive thing, and we’re talking $400K in “search” grants and $200K in traction grants.  The grants should be secured grants — secured by equity/equity options.

If you do this — less than a million dollars a year — over 5 years, you’ll develop new businesses and change the dynamic of your town.  This process is the seed around which small businesses will form.  Some of those will grow, perhaps quite large.  It’s the formula VC’s use, but are not fully cognizant of.  If your town has been in decline for a long time, then you may need to start without the grants. Those grants keep people afloat while they start a business — without them, you’ll get higher drop-out rates — but you’ll eventually find success.  And success means job growth!

November 21, 2016

Steps to Install TensorFlow with GPU on Windows

Filed under: Uncategorized — ipeerbhai @ 1:11 am

I normally use Encog and a self-written learning framework for when I do audio pipeline learning.  I’ve been tempted by CNTK and TensorFlow.  CNTK uses tools whose license is, sadly, too restrictive.  TensorFlow’s ecosystem is more in-line with what I need.

I’m a Windows guy, and I can use TensorFlow (TS) via Docker.  But I want to use my GPU.  I have a CUDA-compliant GPU on one of my machines, along with Windows 10 and Visual Studio Community.  The official readme is designed for VS Pro, not Community.  The key difference is that VS Community doesn’t officially support TensorFlow 32-bit with CUDA, only 64-bit.

Here are the steps I’ve figured out so far:

Prerequisites.

You’ll need SWIG, CUDA, the NVIDIA cuDNN library for CUDA, Git, CMake, Python 3.5, and numpy 1.11.  You can use Anaconda to satisfy the Python/numpy requirement.  Install Anaconda, then conda install numpy in an elevated command prompt.  The rest you’ll have to download installers for and install.  Oh, and Visual Studio Community 2015.  I’ll assume a default install drive of C:.  I’ve adapted the steps from the official GitHub readme here:

https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/cmake/README.md

You’ll want to read that first, as the changes are pretty minor.

Also, if you have python 2.7, then remove Python 2.7 from your path — it’ll interfere with CMAKE.

Steps

  1. launch a CMD window and setup the environment.
    1. Put all the above pre-reqs in your path environment variable, except for Visual Studio.
    2. run “C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\bin\amd64\vcvars64.bat”  ( This is changed from official docs )
    3. Put CMAKE in your path: set PATH=”%PATH%;C:\Program Files\CMake\bin” ( different than docs)
  2. Git pull tensorflow, change into the CMAKE dir, then build this CMAKE invocation line:
    1. cd /d “C:\temp\tensorflow\tensorflow\contrib\cmake\build”
    2. cmake .. -A x64 -DCMAKE_BUILD_TYPE=Release -DSWIG_EXECUTABLE=”C:/tools/swig/swig.exe” -DPYTHON_EXECUTABLE=C:/Users/%USERNAME%/AppData/Local/Continuum/Anaconda3/python.exe -DPYTHON_LIBRARIES=C:/Users/%USERNAME%/AppData/Local/Continuum/Anaconda3/libs/python35.lib -DPYTHON_INCLUDE_DIR=”C:/Program Files/Anaconda3/include” -DNUMPY_INCLUDE_DIR=”C:/Program Files/Anaconda3/lib/site-packages/numpy/core/include” -Dtensorflow_ENABLE_GPU=ON -DCUDNN_HOME=”C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v8.0″
  3. In theory, you can MSBuild the resulting vcproj from step 2.  I’ve found some build breaks, so I’ll update this post when I’ve figured them out.  Here’s the list so far:
    1. Error C1083: Cannot open include file: 'tensorflow/cc/ops/image_ops.h': No such file or directory ( project tf_label_image_example, c:\temp\tensorflow\tensorflow\examples\label_image\main.cc, line 38 )
    2. Error C1083: Cannot open include file: 'tensorflow/cc/ops/array_ops.h': No such file or directory ( project tf_tutorials_example_trainer, c:\temp\tensorflow\tensorflow\cc\ops\standard_ops.h, line 19 )
    3. Error LNK1104: cannot open file 'Debug\tf_core_gpu_kernels.lib' ( project grpc_tensorflow_server, C:\temp\tensorflow\tensorflow\contrib\cmake\build\LINK, line 1 )
    4. Error LNK1104: cannot open file 'Debug\tf_core_gpu_kernels.lib' ( project pywrap_tensorflow, C:\temp\tensorflow\tensorflow\contrib\cmake\build\LINK, line 1 )

 

Docker, Tensorflow, and scikit-learn on Windows

Filed under: Uncategorized — ipeerbhai @ 12:58 am

I wanted to play around with the docker version of tensorflow while I’m trying to fix build breaks on the gpu-accelerated Windows TS deployment I’m playing with.

There’s already a TS docker image.  I needed to get it and modify it.  Here’s the steps I did to do that.

Prerequisites:

Have a Windows Machine running Docker, either via VirtualBox or Hyper-V.  You’ll need to know how to set a port forwarding rule to the default docker VM.

Steps:

  1. Pull the image:  "docker pull gcr.io/tensorflow/tensorflow"
  2. Run the image:  "docker run -it -p 8888:8888 gcr.io/tensorflow/tensorflow"
  3. Exec a shell:
    1. "docker ps" ( find the container ID )
    2. "docker exec -it [CONTAINER ID] bash"
  4. Install scikit-learn via pip in the image: pip install scikit-learn
  5. Exit the bash shell.
  6. Create a port forward rule from localhost:[PORT] to [default:8888].
  7. Shut down the jupyter notebook running by default in the TS image.
  8. docker commit [CONTAINER ID] tensorflow-local

You can now run the image you created with:

docker run -it -p 8888:8888 tensorflow-local

This gives you a jupyter notebook server with TS and scikit-learn as a docker machine.

Now, if only nvidia-docker would work on Windows.  A man can dream…

September 22, 2016

Voice Matters

Filed under: Uncategorized — ipeerbhai @ 6:56 pm

Experian just released a survey of 180 Echo owners about their Echo experience.  You can read the report here.  It showed some great findings — NPS of the echo was 19 — very high, but not extreme ( Google Chrome, for example, is around 35 ). Most impressive is that 35% of echo owners are shopping online, right now, with voice!  This means people like the echo and spend money with it.

Experian believes that Voice is now entering the, “Early adopter” phase of the hype cycle.  I’m surprised that it’s taken voice so long to get to this phase — but I’m an early adopter, having used the echo since their late betas.  I also have a VR setup, and I code for a living.

When voice dialers ( Siri, call my wife! ) became mainstream, they changed the world.  I use one every time I make a phone call.  This gives me hands-free ability when I drive, and voice dialing is a mainstream use case.  I fully expect voice computing to go mainstream too, and the market here to grow in leaps and bounds.

 

June 23, 2016

Learn OpenSCAD

Filed under: Uncategorized — ipeerbhai @ 10:21 pm


Want to learn to make 3d objects like this cool Tiara from Adafruit?  Come to the OpenSCAD class at Metrix Create Space on Aug 4, 2016.  We’ll go over the basics of drawing 3d objects by describing them in an open-source, free, C-like language.

Register here:

http://www.metrixcreatespace.com/store/openscad-84

June 7, 2016

Results of Genetic Algo vs NN

Filed under: Uncategorized — ipeerbhai @ 7:29 pm

For my voice AI project, I’ve been looking at genetic algorithms and neural nets.  I wrote a gate array learner and created a truth table of 4 input points and 2 output points.  I knew, ahead of time, that 2 XOR gates wired to inputs 1,2 and 3,4 respectively would perfectly fit the space.

I then wrote a genetic algorithm to solve the space problem, and counted how many times the algorithm tried to solve the problem before succeeding.

The algorithm tried between 811 and about 28,000 times before solving the space.  A neural net solved the same problem in anywhere from 43 tries to never, even when given the same number of nodes.  A massively overfitted neural net with 2,000 nodes in the hidden layer converged far faster than a lightly overfitted network with only 5 nodes in the hidden layer.

So, I’d probably call NN the winner — but only when massively over-fitted.

May 26, 2016

Genetic Algorithms vs Gradient Descent

Filed under: Uncategorized — ipeerbhai @ 8:28 pm

I’ve been working with a BitArray pattern recognition system for sound processing.  I implemented a genetic algorithm with single-point mutation and tested that algorithm against a data set of sounds ( me talking and a recording of violins playing.  The idea is to detect me talking over the violin noise, with the hope of eventually being able to tell speech and noise apart. )

It didn’t work at all.  I could create semi-optimal ( aka local minima ) solutions that could mostly guess me talking vs the violin, but not always.  There was a global solution — by pure chance I hit it a few times, and the system worked correctly ( about 1/10 of the time I hit the correct global optima; 9/10 of the time I hit a local optima ).

I wanted to see if I could evolve the local optima detectors to the global.  With a SNIP mutation, it didn’t work ( though I hypothesized it should work some of the time.  The global optima is a single bit, bit 47, being false in the encoded samples. )

From this, I calculated the number of mutations needed to get from each suboptimal solution to the global optima.  I calculate at least 4 to around 7 serial snips, with addition/deletion being far more valuable than transposition.

Cost tracking indicates that the global optima takes 318,000 if/then tests to achieve in a good case.  ( 500 sample points in the space — small data… )

I have no idea what gradient descent would take here.  But I now know an appropriate DNN topology to guess correctly: 25 samples in the input layer, 25 neurons + bias in the hidden layer, and 1 output neuron should simulate my genetic selector.  Then I can tell which is more efficient.  I suspect that a poly-snip genetic approach would work, along with the 25-neuron DNN.  I’ll have to implement both and see which is more efficient: DNN or genetic.

May 9, 2016

What am I up to in 2016?

Filed under: Uncategorized — ipeerbhai @ 3:01 pm

For the past month, I’ve been working on a machine learning program, accidentally.

A year or so ago, I wrote a little app that uses cloud AI to do language translation.  It worked!  Only for me!  See, I grew up in the American Midwest.  I actually went to the University of Nebraska for a while.  I speak broadcast-perfect English — I could be a news anchorperson.  I also understand AI.  In machine translation, I understand that it’s just “transcoding” based on word frequency, Kenneth.  This means I can have this kind of conversation with myself:

“How many dogs do you have?”/ “I have two dogs”.

So, because of these factors, I can use a translation AI without problem.  But I often interact with people who are older, have strong accents, and don’t really understand the processing time and optimal speech patterns for cloud machine translation.  They speak differently:

“How many dogs do you have?” / “two”.

Fragmented, fast, impatient, and ambiguous.  A machine system won’t handle this conversation well.  The accented, older human is now just frustrated with the thing.  They didn’t get enough of a clue from the system about what was going on, and it took too long to work.  They want “effortless” translation, or they don’t believe/trust it at all.

So, I wanted to solve the problem of conversational translation, along with a slew of other problems like contact search.  Thus, I stepped through the looking glass and decided it was time I learned AI development.  I went looking for frameworks, discovered Encog, a C# neural network/ML framework, and played around with it.  I discovered the amount of featurization and pre-processing needed for sound NNs was higher/harder than I liked.  It could be done, but only with a metric tonne of labeled data — data I don’t have.

So, I looked at “small data” ideas.  One that interested me was the two-dimensional vector field learner that Numenta has.  I began a pure C# implementation ( I normally don’t code in C# because I hate UWP — but this kind of project uses old .NET APIs and no UWP ).  And along the way, it hit me — this two-dimensional learner was a neural network, and machine learning is really just pattern recognition.  The sparse maps are like labels — another way of saying, “Like these, not those”.  The two-dimensional field could be represented by a vector of A elements, where A = M x N, the dimensions of the original field.

But there’s power in the representation that I hadn’t expected.  Turns out that viewing a NN as a two-dimensional vector and using masking leads to easier human understanding of what the heck is actually going on in the system.  And this leads to new ideas ( which I’m not ready to share yet, because they’re possibly insane ).

Nowadays, I’m developing out the system because it’s intellectually engaging.  I’ve started from ideas, seen how they work in existing frameworks, then moved and maybe improved those ideas into my own framework — because I believe “if you don’t build it, you don’t understand it”.  My framework is woefully incomplete.  It will always create a pattern based on the least significant bits.  It’s easy to fool, and it doesn’t use enough horizontal data when building masks.  But it can do something amazing — it can tell two sounds apart with exactly one sample of each sound, and does so without a label.

And that’s not the most exciting part!  As I’ve been playing with these ideas, a new one has emerged about how to stack and  parallelize the detectors and make an atemporal representation of sound streams.  This seems to match what Noam Chomsky says about how human “Universal Grammar” must work. If this idea pans out ( and it’s maybe months of implementation time to find out ), then there’s a small chance that I’ll figure out some part of the language translation problem.

All that excitement is tempered by the fact that I have limited time.  Eventually, I’ll run out of money, and thus time, to do this research.  So the problems I must solve are:

  1. Can I build a framework that’s able to solve the problems I’m interested in?
  2. If not, can the pattern detectors solve problems others are interested in?
  3. Can I sell something from this system to fund my own time?

Anyways, that’s what I’ve been up to recently.
