Imran's personal blog

June 21, 2017

Etsy won’t make you rich

Filed under: Uncategorized — ipeerbhai @ 5:32 pm

I saw this graph online from Earnest, a student loan refinancing company, and loved it.

They studied how much people make at different “gig economy” jobs.  See all those 0% at reasonable wages — like the 0% of Uber drivers making $2,000/mo ( $24,000 a year ).  This is the problem we need to solve as policy.  These companies minimize wages in ways that are unconscionable.  They violate the social contract routinely, and we citizens go along with it because we want cheap cab rides from a guy named Michael.  Almost all are equally bad — the median monthly income on Etsy, for example, is $40, and only 1% make more than $2,000.  You can’t live on $40/mo.  Yet I meet people all the time who think, “Well, I’ll make things and sell on Etsy to survive!”  No, you won’t.  You’ll laser engrave a few trinkets and be lucky to make $100 that month.

So, you might as well scratch Etsy and Uber off the “survival money” list — you just won’t make it.  If you can afford to own property, Airbnb might work.  If you don’t mind the hours, TaskRabbit might work.  Might — even then, you’re unlikely to make it.  America has been paying its service workers too little for too long, because we haven’t been taxing capital enough for a long time.  We could pass legislation to increase worker pay — and should.  We need to start taxing equipment like we do labor, so that we can build the infrastructure of our future.  People pay income tax, which is really a supply tax in disguise.  ( Most people work at a corporation, then pay income taxes on the income they get from the corporation.  If you think about it, that’s a tax on the labor side of supply. )  If the IRS can tax the labor part of supply, then why not tax the capital side?  Bill Gates recommends this, and it makes sense in our changing economy.


May 24, 2017

Why I’m running for City Council

Filed under: Uncategorized — ipeerbhai @ 8:23 am

[Photo: Imran Peerbhai]

Hi,

I’m Imran Peerbhai, and I’m running for Kirkland City Council, Position 7.  But you knew that already!

So, why am I running?  I, like the other candidates running for Position 7, believe the city is well-run.  We’re blessed with strong government and economic growth!  I would like to keep it that way.  When I was pondering whether to run for office, I ran through a list of hopes and fears.  To really understand those hopes and fears, I’d like to tell you a little about my background.

My Biography

Birth and Immigration to the U.S.

My family immigrated to the U.S. in the 1970s, when I was a preschooler.  My dad ran a little music and record shop in Karachi, Pakistan.  During that time, many in Pakistan were radicalized due to the war in Afghanistan with the Soviet Union.  My dad sold mostly British and American music — which did not sit well with the extremists.  One day, masked men he thinks were local police stopped by his music store and told my dad to stop selling music, or “something bad” would happen.  Fearing persecution, he closed his shop.  He quickly realized that we had to leave, or be killed in the wave of radicalization sweeping the land.  We were, you see, the wrong kind of people — the kind that listened to Bob Dylan and supported women’s education.  I was about 4 months old when he kissed me goodbye and left for Chicago.  My dad had some skill as a machinist and was able to find work in 1970s Chicago.  He eventually landed a blue-collar, non-union job operating C.N.C. machines at a tool and die shop.  Additionally, he drove a taxi cab in the evenings so he could save to bring his wife and kids here.  It took a couple of years, and I was around 3 years old when my mom and I arrived, during the days of Jimmy Carter as President.

Elementary School through High School.

My dad was a machinist without a college education.  My mom had an eighth-grade education and was a stay-at-home housewife.  Layoffs were common, as America’s “Rust Belt” was forming.  We were dirt poor, and my dad was often unemployed.  He drove around for years in a Ford Escort with tape for tail-lights and a crushed trunk.  We didn’t have cable T.V., or more importantly, health insurance.  I never got braces — or anything else I needed as a kid.  We were fortunate that there were no major illnesses in the family, such as cancer or heart disease.  If those had happened, or if we had lived in urban Chicago instead of the suburbs, I’d likely have had a different outcome — badly educated, poor, and at the edge of homelessness.  Even now, some of my friends who grew up near me are in jail, profoundly underemployed, or deceased.  Others have succeeded and have had a broad range of careers, including pilots!  It’s not as bad as it seems, as one can grow from the challenges one faces.

I learned early on in life — good public schools where our kids are safe to learn matter.  They lift people out of despair.  What could my mom have accomplished in life with a high school education, or a college one?  One thing that environment taught me about good public schools — they can only exist with good government and at the right population density.  Bad government, especially corrupt government, leads to bad outcomes.  With too much population density, the schools don’t get enough money ( in any country or system ) to help the sheer number of children who need it most.  With too little population density, there aren’t enough talented educators available for the schools.

So, though we were poor, uninsured, stressed, and dealing with displacement, I was able to learn enough to begin the climb out.  I took English as a second language classes from kindergarten through third grade, and by fourth grade, was my class’s most advanced reader!  Mostly because I loved comic books, and the library had them.  To this day, Archie comics are still my favorite, followed by The Incredible Hulk.  I don’t really speak any Urdu any more, though maybe I can ask for a glass of water.

As an aside, my dad insisted on assimilation.  He hated that extremism had become the norm in Pakistan, and proudly drove GM and Ford cars for as long as he could.  He insisted that we speak English fluently and regularly, even at home.  He believed that we immigrants have an extra responsibility to be good neighbors in a country gracious enough to adopt us.  Certainly, we couldn’t commit crimes or accept government assistance!  We would live, or die, honorably.  So, by about the fifth grade, I was indistinguishable from any other person born in America.  I ran track and cross-country, was a bit of a computer nerd, and held various retail and restaurant jobs.

One major difference — my dad’s frequent unemployment and poor health forced him to move to Florida, while I stayed in the Chicago metro area to work and pay for groceries for my mom and myself ( we were still dirt poor ) after the age of fifteen.  My mom reunited with my dad after I finished high school.  One hard lesson I learned — wage theft, when employers force you to under-report worked hours, happens to those who can barely afford to eat.  The takeaway lesson from these years: America is an amazing, open country, but it has some problems enforcing the laws that help its poorest citizens thrive.

Exiting Poverty

After I graduated high school, I went to college.  Then it ended, and I had no money for a place to live, and no job.  That left me homeless for about a month, “couch surfing” with friends and acquaintances.  That’s when I moved to Seattle — because there was a couch I could crash on.  Having lived in America’s rustiest cities, I was suddenly in Nirvana.  Chicago didn’t really have computer jobs back in those days, nor did Iowa or small-town Nebraska.  I had worked at a help desk in college, and was able to program in multiple languages and write code.  My technical skills allowed me to get in as a contractor at Microsoft, and I eventually became a full-time Microsoft employee.  I worked my way up over the years to managing a small build — aka “devops” engineering services — team, and the salary got me out of poverty.

Health insurance meant I could get braces, and some needed surgery.  This taught me — lack of health care is a crime against humanity.  So many people live in quiet desperation, afraid of an accident, cancer, or other disease — not because they’ll die — but because they’ll go bankrupt and destroy their families in the process of trying to live.  Even in ordinary cases, people will suffer and be held back from becoming productive members of society.

The other thing I learned — diversity is hard in tech.  When I became a manager, there weren’t any women or African Americans on the team I inherited, nor had there been while I was there.  Going to school in the suburbs of Chicago had shown me real discrimination “up close and personal”, and I learned first-hand how silently and invisibly it comes into being, and how it harms everyone — both the person being discriminated against and the person doing the discriminating.  So, when there were openings on my team, I sought out qualified African-Americans and women for roles.  I openly talked about how our group was better off with them, and how the lack of differing views created sub-optimal processes.  I made my team not only more diverse, but our processes better, through that strength.  We became the best team in Microsoft, and I won a “Hero” award — something I’m still proud of today.  I did other things at MS, but none so meaningful to me personally.

Oh, and I did go back and get my college degree.  I graduated from the University of Washington with a Bachelor of Arts in Economics and a certificate in Econometrics ( aka Data Science ), and am a member of the Omicron Delta Epsilon honor society.  I have a wife and two young kids at home, and a senior soon to be with us.  One child goes to elementary school in Kirkland, and the other is in daycare, also in Kirkland.  I run a small artificial intelligence company, where we’ve built a voice assistant for elder care.

Why I’m running.

So, back to the original question – why am I running?  I’m running because I believe in honest government, free from conflict of interest – even the appearance of conflict of interest.  I believe in law and order, and feel crimes should be punished objectively, with more attention paid to victims and their needs.  I believe that people who have lived in poverty are best equipped to understand and fight it.  I believe that Kirkland is a place where content of character matters more than the color of one’s skin.  I believe in measured growth, so that we don’t pave paradise and put up a parking lot.  I believe that we need great services for our seniors, and should focus more on their needs in the community.  Kirkland is growing and changing, and I believe I am the best bridge between the Kirkland of today and the Kirkland of tomorrow.

At a city level, I feel the city should allow residents with “strange curbs” to install curb cuts if wanted.  To make housing more affordable to build, I believe the city should investigate closed-wall inspections with photographic proof.  I feel the city’s process for handling dangerous trees should be improved, rather than aggressively fining homeowners for removing dangerous trees.  I feel the four-hour parking zone on Market Street should be removed, to allow better access to the transit stops there and relieve pressure on our overcrowded park-and-rides.  I see that parents need a crosswalk at 84th St NE and 139th, instead of jaywalking with their children in the mornings, both for the safety of the children and to improve traffic flow when school starts.  I believe that we need to focus more on both education and job growth for our citizens and children, and would support a business incubator in the area near the Municipal Court building.  When I watch the online videos of City Council meetings, I observe that the public is mostly ignored.  I feel that this agenda can only be heard if I’m on the council.

Thanks,
Imran Peerbhai
Citizen running for Council.

May 15, 2017

WannaCry

Filed under: Uncategorized — ipeerbhai @ 5:56 pm

I run an out-of-date version of Windows on some of my laptops, with Windows Update shut down.  I used to work in the security and anti-virus industries, on advanced threat detection and remediation.  I should be the last person on Earth to say, “You don’t need to patch a properly run Windows system newer than XP — ever.”  But so far, I can make that statement and stick to it, even with WannaCry.

So, how do I stay ahead of ransomware?

  1. Shut down most inbound ports via the firewall, and uninstall the most dangerous services.  WannaCry uses ports 445, 137, 138, and 139.  I long ago stopped the Windows SMB server on my machines, as that’s always been a security hole, and uninstalled the SMB service on my home PCs.  It’s great in an AD environment, but who uses AD at home?
  2. 30-day offsite cloud backups from Backblaze.  Wonderful service!
    1. This is insurance.  If a ransomware attack did make it through, say via an unpatched 0-day, I can get my stuff back from my offsite backups.
  3. Encrypted personal files.
    1. I believe that all systems are inherently “public”.  So, I use full disk encryption and password vaults with strong passwords.
    2. I think the LinkedIn hack is a great example — I viewed LinkedIn as a “low priority” password, and had a variant of that password, with some mods, on my Bitbucket.  LinkedIn leaked the password in their big data breach, and hackers got into my Bitbucket via a dictionary attack using it.  Luckily, I didn’t have anything of value on Bitbucket, but I now use a unique password on every site, and I use 2-factor auth whenever possible.
  4. Git + Dropbox.  Any file I deem of value is also in a git repo someplace — I have 2 of them.  It’s really easy to make a Dropbox folder, then add that entire folder to a git repo.  A virus will encrypt your Dropbox — oh well.  Just delete the directory and “git pull”.
  5. No linked MSA — even on Windows 10.  This is a bit harder, but I worry — if someone hacks my Microsoft account, say via a shared password from LinkedIn, they can lock me out of my PCs just by changing the MSA password.  Thus, I don’t use an MSA login on some of my Windows PCs.  This has the benefit of also killing a lot of MS spyware ( like Cortana ).
  6. No IE/Edge.   I use a security-focused browser.
  7. Ad blocker.  The one virus infection I had 10 years ago was from an ad served by a reputable website that exploited an Adobe Acrobat 0-day on my machine while I was in another room.  That forced me to analyze the virus and see what it did.
  8. Registry dumps.  Again, insurance for when I do eventually get hacked. With a registry dump, I can format the machine and import the .reg file.  This means a lot of my software installs remain “working” just by restoring backups.
  9. Run ESET antivirus.  Here’s why: https://www.virusbulletin.com/testing/vb100/latest-rap-quadrant/

I know it’s only a matter of time before I’m hacked.  Some 0-day exists out there that I’m vulnerable to.  And I choose to run Windows instead of Linux ( and let me tell you — as a data scientist, that’s such a pain.  TensorFlow, Python, etc. are just such a pain to get working. ) — so it’s really a matter of time before either I’m hacked, Microsoft is hacked, or one of my web services is hacked.  But so far, knock on wood, the mix of firewall settings, service shutdowns, encryption, backups, and web services allows me to run “unpatched” on some of my systems ( even on public networks ) and remain uninfected.

By the way — this bothers me.  I have to take so many steps to keep ahead of bad guys, and I know I’ll lose one day.  It’s really just a matter of time.  I wish MS would do a few things:

  1. Unlink Microsoft Accounts (MSAs).  MS does a good job securing their network.  But linked MSAs are a recipe waiting for an exploit.  The bad guys don’t hack the PC — they hack the MSA system, and then they’ve hacked, perhaps silently, every PC using MSA login.
  2. Improve Windows Defender.  It’s just not very good.
  3. SKU-lock away AD policies.  “Windows Home” shouldn’t allow group policy to disable the command shell — or a host of anti-virus responses — ever.
  4. Join/Unjoin Windows Update.  Allow old PCs that have turned off patching to rejoin whenever.  Technically, MS does this — but does it really badly.  If you fall behind enough, you can’t ever catch up, as WU will just stop working.
    1. Their answer is to force updates in Windows 10.  But this just pisses off users who don’t want to lose their computer for a day every 6 months or so as new, mostly non-security, patches are installed and existing preferences are lost.  Who needs Cortana?  Why reset the default browser to Edge every six months?  The current approach is too heavy-handed and self-serving.

These 4 steps would greatly improve security for Windows.  Three of them are relatively easy, and MS could do them in a few months.  The fourth is harder, but can be done with acquisitions and smart policy.  I don’t know how many “Blaster” or “WannaCry” outbreaks are needed before MS does the right things to make security better for users without being self-serving.

May 9, 2017

Hypothesis — competing without barriers

Filed under: Uncategorized — ipeerbhai @ 5:14 pm

One of Porter’s 5 forces is the threat of new entrants.

The car rental business has been hemorrhaging cash because, well, anyone can lease you a car.  There are the big guys — Hertz, Avis, Alamo, etc. — for when you need to rent for a day or a week, maybe a month or two.  Then there are the dealerships, who may short-term lease you a car.  Then there’s “Joe’s car rental”.  I don’t recall the actual name, but there was a real car rental place in my college town that rented to college students.  Old beaters — but the big guys don’t normally rent to people under 26 due to the insurance costs.

The problem car rental places have is this:  If they’re making good profits, then new entrants will follow with lower prices.  This is good — exactly what capitalism is supposed to want.  Remember, according to Adam Smith — profits are a sign of a problem.

But the question — how do you compete with this?  You could do an “Uber” for car rentals, but I expect people want access to their own cars, and Uber itself is already the best possible version of that business.

So, you try to compete on branding, efficiency of scale, or network effects.  Branding — “Pick Enterprise, we’ll pick you up.”  Network effects — no idea — maybe “rent at location A, return at location B” — but network effects usually refer to the demand side, not the supply side.  Efficiency — “buy lots of cars cheap, vertically integrate maintenance and fuel”.  There’s a new wrinkle in the efficiency game — efficiency of advertising.  The easiest business to win is selling more to existing customers.  Cross-sell flights, hotels, and other travel services.  What else could be cross-sold?  Travel insurance?  How about car insurance?  “Dear Avis customer: we need a lot of insurance on our cars.  We’ve partnered with Great Florida Insurance to create a bulk insurance package that we think could save you money.”  Gas?  But the real play is to become a supplier of something.  Vehicle entertainment systems?  Burner pre-paid cell phones?

I would love to be a fly on the wall at Hertz’s executive level right now and see what they’re thinking.

April 20, 2017

Learn to be a Programmer!

Filed under: Uncategorized — ipeerbhai @ 5:39 pm

As an experienced Software Developer/Data Scientist/PM/Lead, I sometimes get the question, “How should I learn to program?”

Personally, I think that programmers should know a few different languages, but that comes from experience.  I saw this post on reddit, and I wanted to add a “+1” agreement to it.

https://np.reddit.com/r/learnprogramming/comments/5zs96w/github_repo_with_100_free_resources_to_learn_full/

In short — anyone who can write at an 8th grade level or better in any human language ( say English, or Chinese ) and can solve this equation:

3x + 1 = 7.  What does x equal?

has the intellectual ability to become a developer in about a year, and a good one in about three, if they follow this simple advice:

  1. Find a set of problems you’d like to solve.
  2. Pick a language/framework with lots of people who have worked on similar problems and have posted their solutions someplace.
  3. Set aside some time.
  4. Get some learning materials.
  5. Start coding.

With some focus, time, and self-forgiveness, you will get there.  The toughest part of programming is that you occasionally hit a “Valley of Despair” where nothing seems to work, and you don’t know why.  Having the emotional grit to get through the valley and the communication skills to find help ( even if it’s knowing how to work Google ) will get you there.
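For scale, solving the equation above as a first program takes only a few lines ( a C# sketch; the names are mine ):

using System;

class FirstProgram
{
    static void Main()
    {
        // Solve 3x + 1 = 7 by rearranging it by hand: x = (7 - 1) / 3.
        double x = (7.0 - 1.0) / 3.0;
        Console.WriteLine("x = " + x); // prints x = 2
    }
}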

March 22, 2017

Unity Mesh and Materials Notes

Filed under: Uncategorized — ipeerbhai @ 2:13 am

These are my notes on how to make a Mesh with Materials from pure C# code in Unity, digested from:

https://forum.unity3d.com/threads/create-materials-at-runtime.72952/
http://catlikecoding.com/unity/tutorials/constructing-a-fractal/
http://catlikecoding.com/unity/tutorials/procedural-grid/
http://catlikecoding.com/unity/tutorials/rounded-cube/

Definitions:

Space — A coordinate system used to define points.

UV — A 2D space normalized to the image’s size.  The definition: UV = (0,0) is the origin and (1,1) is the top right in image space.  To convert an image point from pixels to UV, take U = (chosen pixel x position)/(image pixel width) and V = (chosen pixel y position)/(image pixel height).  A loose way to think of it: the UV value is the “%” of the way towards the top right of a picture.
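In code, that conversion is one division per axis.  A minimal C# sketch ( the helper name is mine ):

using UnityEngine;

static class UVHelper
{
    // U = pixelX / imageWidth, V = pixelY / imageHeight, per the definition above.
    public static Vector2 PixelToUV(int pixelX, int pixelY, int imageWidth, int imageHeight)
    {
        return new Vector2((float)pixelX / imageWidth, (float)pixelY / imageHeight);
    }
}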

Steps seem to be:

  1. Generate a GameObject( aka GO ).
  2. Generate Geometry.
    1. Geometry is stored in the mesh property of a MeshFilter component attached to a GameObject.
  3. Generate Texture.
    1. Texture is stored in the material property of a MeshRenderer component attached to a GameObject.

Step 1: Make a gameobject with the needed components:

There are two different ways to do this.  Method one:

Add this decoration to the class that will be your GO.

[RequireComponent(typeof(MeshFilter), typeof(MeshRenderer))]

Method two — use the API to add at runtime.

		gameObject.AddComponent<MeshFilter>().mesh = mesh;
		gameObject.AddComponent<MeshRenderer>().material = material;

 

Step 2:  Generate Geometry.

Meshes are made of a handful of ideas.  This means you have a handful of things to figure out to make a mesh:

  1. Where are the vertices in XYZ?
  2. What are the right normals to use per vertex?
    1. Which vertices should be duplicated to handle the different faces using them?
  3. What is each vertex’s UV?
  4. What is each vertex’s curvature?
  5. What subfacets should you create to maximize your texture?

The first idea is that meshes are made of triangles.  You add the points in Unity’s world space as an array of Vector3, then link them into triangles using the indices of each vertex in that array.
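Here’s a minimal sketch of that idea: a one-by-one quad built from four vertices and two triangles ( the method name is mine ).

// A 1 x 1 quad in the XY plane, built from two triangles.
Mesh BuildQuad()
{
    Mesh mesh = new Mesh();
    mesh.vertices = new Vector3[]
    {
        new Vector3(0, 0, 0), // index 0: bottom left
        new Vector3(1, 0, 0), // index 1: bottom right
        new Vector3(0, 1, 0), // index 2: top left
        new Vector3(1, 1, 0)  // index 3: top right
    };
    // Each group of three indices is one triangle; clockwise winding faces the viewer.
    mesh.triangles = new int[] { 0, 2, 1,   2, 3, 1 };
    // One UV per vertex, matching the definition above.
    mesh.uv = new Vector2[]
    {
        new Vector2(0, 0), new Vector2(1, 0),
        new Vector2(0, 1), new Vector2(1, 1)
    };
    mesh.RecalculateNormals();
    return mesh;
}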

NOTE.  You can generate submeshes in a mesh.  You specify the array of vertices as normal, but instead of directly adding triangles, you set Mesh.subMeshCount to your count of submeshes, then add triangles to each submesh instead of the main mesh.  Note that submesh indices are 0-based.  Example code:
Mesh output = new Mesh();
output.subMeshCount = 2;
output.SetTriangles(m_TriangleLinesA, 0);
output.SetTriangles(m_TriangleLinesB, 1);

// if you're not using submeshes, you can just add triangles by:
output.triangles = m_TriangleArray;

One part of generating the geometry is generating the UV coordinates per vertex.  Here’s a good URL for UV generation on spheres: https://gamedevdaily.io/four-ways-to-create-a-mesh-for-a-sphere-d7956b825db4#.ga3iofdlb

For cubes, it’s easier.  There’s yet another method for cylinders.  It seems cube, cylinder, and sphere are the three approximations people use for generating their UV information.  You’ll use the vertex positions and normals to figure out the right UV position.

Step 3: Generate texture.
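A minimal sketch of this step, assuming the built-in Standard shader ( the blank texture is just a placeholder for one you’d load or generate ):

// make a material at runtime and hand it to the MeshRenderer.
Material material = new Material(Shader.Find("Standard"));
Texture2D texture = new Texture2D(256, 256); // placeholder texture
material.mainTexture = texture;
GetComponent<MeshRenderer>().material = material;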


February 26, 2017

My Unity/Daydream VR notes

Filed under: Uncategorized — ipeerbhai @ 4:13 am

Background

I’m over at the Seattle VR Hackathon, sponsored by AT&T, over the weekend, and decided to build a Daydream version of our DreamHUD hackathon project.  It quickly became apparent that this wasn’t going to work.  So instead, I decided to try to figure out how to implement Unity Daydream VR in any capacity at all.  I talked to 6 other developers here at the VR Hackathon, and none — I repeat, none — got even “hello world” to boot on their Pixel/Daydreams using Unity and the Google VR SDK.  Almost all ended up with a black screen or compile failures.  I’m the only one who got something to build and deploy, but I’ve had no luck getting and keeping a Daydream app up and running with Unity.  I’m hoping my notes help others ( and myself ) get a working Daydream VR app in the future.

Example Source Code:

I put the example source code on GitHub as a public project.

You can find the repo with bugs here:

https://github.com/ipeerbhai/DayDream101

then rebuilt it here with fewer bugs:

https://github.com/ipeerbhai/UnityAndroid101

Main issues

The main issues in getting Unity + Daydream working are:

  1. Install order seems to matter.  Installing things out of order results in “black screen of death” deployments.
  2. The controller emulator doesn’t work out of the box.  With some probing with adb, I was able, a few weeks later, to figure out how to get it working.  Please see the troubleshooting section at the end of this blog post.
  3. The GvrEventSystem craps out during Play on Windows with the controller emulator — the event pump either crashes the Unity editor, or the events just stop firing.
  4. Deploying to Cardboard results in a black screen.
  5. Poor documentation.  I thought MS was bad at MSDN docs — but they’re heaven compared to Google’s docs.  There are no example uses of any of their classes.  Even their own demos crash or black-screen, so we can’t attach breakpoints and debug our way to figuring out their APIs.

Notes

Installation:

Start with the Google instructions here:

https://developers.google.com/vr/unity/get-started

Here are a few tricks I learned from failures along the way.

  • Make sure you have Android Studio 2.2.3 or newer before you start installing Unity/JDKs.
  • For Daydream controller emulator support in the Unity player, you must put ADB.exe in your system path variable.
  • Open a cmd or shell window and run “adb devices” before starting Unity.  Unity’s player won’t be able to debug controller issues if you don’t.
  • Make sure you have Unity 5.6 or newer installed.
  • You must install Java SE JDK 1.8 along with Android SDK 24+ for Daydream to work.  You can install the Android SDK from Android Studio, and the JDK from Unity.
    • Android Studio for the SDK: click the Tools menu –> Android –> SDK Manager
    • Unity for the JDK: Edit –> Preferences… –> External Tools.  Click the little download button next to the JDK textbox.
  • Import the Google VR SDK unitypackage as the *very first thing* in the project!  Making a scene first, then importing the SDK, will cause really hard-to-debug crashes.
  • On Windows, installing the Unity Tools for Visual Studio really makes script development in C# easier.
  • If you get a lot of controller object errors while running the player, stop the player and restart the scene.
  • Order of operations really seems to matter.  Weird crashes and hard-to-debug issues seem to resolve if you change the order in which you install things or start Unity.

After you’ve set the settings from the Google VR Guide, your main camera is now a VR stereo camera.  You can now create a scene.

Editor UI stuff:

On the upper right corner of the scene is a cube with some cones coming out of it.  That’s to control the scene camera in development mode.  Click the cube to enable right-click rotation, and click the cones to “look down” that axis towards the origin.

Questions and Answers:

How do I get a reference to a GameObject from a “Master” script attached to another gameobject?

Example Answer:  Create a GameObject in the Unity editor ( I created a cube, named it “Cube1” ).  In the master script’s Update function, I did this:

void Update () {
    var Cube1 = GameObject.Find("Cube1");
}

How do I rotate this cube?

var cubeTransform = Cube1.transform;
cubeTransform.Rotate(Vector3.up, 10f * Time.deltaTime); // Time.deltaTime is a Unity-provided static float: the time in seconds between calls to Update.  The parameters are a world-space axis ( Vector3.up is the unit vector [0,1,0] ) and an angle in degrees.

What’s Unity’s coordinate system?

Unity has X moving left/right, Z moving forward/back, and Y moving up and down.  This is the “left hand rule”, with the thumb as X, the index finger pointing up as Y, and the middle finger pointing forward as Z.
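You can sanity-check the handedness in code, since in a left-handed system X cross Y gives +Z:

// Unity's cross product follows the left-hand rule:
Debug.Log(Vector3.Cross(Vector3.right, Vector3.up)); // prints (0.0, 0.0, 1.0), i.e. Vector3.forward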

What’s the difference between a terrain and a Plane?
Digested from https://www.youtube.com/watch?v=Oc3odBj-jFA, and unity docs.

Terrains default to 500 x 500 meters in X and Z, with their “lower left” set to (0, 0, 0).  You can deform them, and there’s a default material renderer with property settings that can mimic different types of ground ( like grass, sand, or concrete ).  Planes are smaller, can’t deform, and don’t have a default texture.

How do I make a “grassland with trees” texture onto the terrain?

  1. Import the Standard asset package to get some basic textures.  You can skip this step if you already have the texture you want.
    1. C:\Program Files\Unity 5.6.0b9\Editor\Standard Assets
  2. Select the PaintBrush tool in the terrain inspector.
  3. Click the “Edit Textures” button.
  4. Select “Add Texture”
    1. You can either click the “select” button and pick the asset from a flat list of textures,
    2. or “drag and drop” the asset icon.
    3. I picked “GrassHillAlbedo.psd”.
  5. Add the trees.
    1. Select the tree terrain “brush”.
    2. Click “Edit Trees…”
    3. Click add
      1. Pick one of the standard trees.
      2. Or, you can pick a tree you pre-modeled from the Unity tree modeler.

How do I make a sphere and Bounce it infinitely?
Digested from this video:
https://unity3d.com/learn/tutorials/projects/roll-ball-tutorial/moving-player?playlist=17141

  1. In the unity editor:
    1. Make a sphere of diameter = 1 meter ( the default ), position = 0, 0.5, 0.
    2. Attach a physics rigidbody component to it.
  2. In the MasterScript ( or in the script for the object — I want to keep everything in one master script/gameobject ), type this code in:
  3. // FixedUpdate is called once per physics engine tick, usually before Update
    private void FixedUpdate()
    {
        // update all the forces we want to...
        var mySphere = GameObject.Find("Sphere");
        var theRigidBody = mySphere.GetComponent<Rigidbody>();
        if (mySphere.transform.position.y < 0.51)
            theRigidBody.AddForce(0, 300, 0, ForceMode.Acceleration);
    }

How do I enable Controller Support?
Digested from:
https://developers.google.com/vr/unity/controller-support

  1. Create an empty GameObject and name it Player.
  2. Set the position of the Player object to (0,1.6,0).
  3. Place the Main Camera underneath the Player object at (0,0,0).
  4. Place GvrControllerPointer underneath the Player object at (0,0,0).
  5. Set the position of the Main Camera to be (0,0,0).
  6. Add GvrViewerMain to the scene, located under GoogleVR/Prefabs.
  7. Add GvrControllerMain to the scene, located under GoogleVR/Prefabs/Controller.
  8. Add GvrEventSystem to the scene, located under GoogleVR/Prefabs/UI.

At the end of this, you’ll have a “laser pointer” on your “right hand” in your app.

How do I know what the DayDream controller is pointing at?

Digested from https://www.youtube.com/watch?v=l9OfmWnqR0M

There are two ways I’ve found to do this.

Method 1:  Use raycasting.

  1. Get the controller’s position by treating the controller as any gameobject.
    1. GameObject controllerPointer = GameObject.Find("GvrControllerPointer");
      Transform controllerTransform = controllerPointer.transform;
      Vector3 pos = controllerTransform.position;
    2. You can also get the position in one line of code:
      Vector3 controllerPosition = GameObject.Find("GvrControllerPointer").transform.position;
  2. Get the controller’s orientation and create a forward-pointing vector from the orientation quaternion.
    1. Vector3 fwd = GvrController.Orientation * Vector3.forward;
  3. Use Physics.Raycast to see what the controller is pointing at.
    1. RaycastHit pointingAtWhat;
    2. Physics.Raycast(pos, fwd, out pointingAtWhat);

 

Sample code ( compiled and verified ):

void Update ()
{
    // find the bouncing sphere from inside this central game object.
    var MySphere = GameObject.Find("Sphere"); // Can skip this lookup if the script component is attached to the target GameObject.
    var MySphereTransform = MySphere.transform;

    // find the controller and get its position.
    var controllerPointer = GameObject.Find("GvrControllerPointer");
    var controllerTransform = controllerPointer.transform;

    // use the controller orientation quaternion to get a forward pointing vector, then raycast.
    Vector3 fwd = GvrController.Orientation * Vector3.forward;
    RaycastHit pointingAtWhat;
    if (Physics.Raycast(controllerTransform.position, fwd, out pointingAtWhat))
    {
        var theTextGameObject = GameObject.Find("txtMainData");
        UnityEngine.UI.Text theTextComponent = theTextGameObject.GetComponent<UnityEngine.UI.Text>();
        theTextComponent.text = "hit " + pointingAtWhat.collider.name;
    }
}

Method 2: Use the Unity event system as modified by Google.

Step 1:  Add the GvrEventSystem prefab to your scene, and add GvrPointerPhysicsRaycaster to your main camera.

Step 2:  Implement IGvrPointerHoverHandler on any gameobject you want notified that the controller is pointing at it, like this:

public class myGameObject : MonoBehaviour, IGvrPointerHoverHandler {

public void OnGvrPointerHover(PointerEventData eventData) {

// myThing is now “hovered” by the controller pointer.  Do logic now.
// eventData contains a point of intersection between the ray and the object, along with a delta ( magic? )

}

// WARNING — this breaks in Unity 5.6.b11, but works through 5.6.b10.  Bug?

}

How do I rotate the camera around the world with the touchpad?

Please note — don’t actually rotate the camera via the touchpad — it makes people sick fast.  This code is really only useful on the PC/controller emulator to test input.

// Handling camera rotation with the controller touchpad needs these concepts:
// 1. The touchpad is an X/Y device.  (0,0) is top left.
//      X = 0 means "furthest left touch".  X = 1 means "furthest right".
// 2. We need a large "dead zone" around X = 0.5 to prevent jerky movement.
// 3. You cannot rotate the camera directly.  Google's APIs reset any camera rotations.
//     Instead, put the camera in something, then rotate that something.

void Update()  // cut and paste, but modified, from working code.
{
    float m_rotationSpeed = 10.0f; // normally a class member, put here for demo purposes.
    float deadZone = 0.15f;
    var player = GameObject.Find("Player"); // the object containing the main camera.
    if (GvrController.IsTouching)
    {
        if (GvrController.TouchPos.x < 0.5f - deadZone)
        {
            // Should be rotating left
            player.transform.Rotate(0, -1 * Time.deltaTime * m_rotationSpeed, 0);
        }
        else if (GvrController.TouchPos.x > 0.5f + deadZone)
        {
            // Should be rotating right
            player.transform.Rotate(0, 1 * Time.deltaTime * m_rotationSpeed, 0);
        }
    }
}
How do I hit the sphere with a “pool stick” on app button press?

In this scenario, we’re using the controller as a “pool stick” to hit the bouncing sphere and move it while the user pushes the app button.  Some learnings:

  1. AppButtonDown is only true for one frame — the frame when the user pushed the button.  This is a problem with a bouncing sphere, because the user may not be pointing at the bouncing ball on the exact frame they push the button.  Instead, we’ll use the boolean that stays true while the button is held, and add force as long as the button is down.
  2. GvrController does not expose position, so we have to use the GvrControllerPointer Prefab in Assets/GoogleVR/Prefabs/UI/GvrControllerPointer.prefab attached to a “Player” object.

private void FixedUpdate() // modified from working code.
{
    // find the bouncing sphere from inside this central game object.
    var mySphere = GameObject.Find("Sphere"); // Can skip this lookup if the script component is attached to the target GameObject.
    var sphereRigidBody = mySphere.GetComponent<Rigidbody>();
    var mySphereTransform = mySphere.transform;

    // find the controller and get its position.
    var controllerPointer = GameObject.Find("GvrControllerPointer");
    var controllerTransform = controllerPointer.transform;

    // use the controller orientation quaternion to get a forward pointing vector, then raycast.
    Vector3 fwd = GvrController.Orientation * Vector3.forward;
    RaycastHit pointingAtWhat;
    if (Physics.Raycast(controllerTransform.position, fwd, out pointingAtWhat))
    {
        if (GvrController.AppButton)
        {
            // push the sphere along the pointing direction while the app button is held.
            Vector3 forceToAdd = GvrController.Orientation * Vector3.forward * 100;
            sphereRigidBody.AddForceAtPosition(forceToAdd, pointingAtWhat.point);
        }
    }

    // update all the forces we want to...
    if (mySphere.transform.position.y < 0.51)
        sphereRigidBody.AddForce(0, 300, 0, ForceMode.Acceleration);
}

 

How do I display text in world view?

Digested from:
https://blogs.unity3d.com/2014/06/30/unity-4-6-new-ui-world-space-canvas/

Text in Unity is rendered on a canvas.  This is a problem, because the GvrController class has a canvas.  So, if you Create –> UI –> Text, you’ll bind that text to your controller.  Instead, you have to make a canvas in world view, then adjust the scale from pixels to world units ( aka meters ).

  1. Create a new canvas using Create –> UI –> Canvas.  Give it a name.
  2. Select the canvas in the scene, then look at the Canvas component in the Inspector.  Change “Screen space overlay” to “World space”.
  3. The canvas is a gameobject — you can move it as you want.  But don’t change the size property.  Instead, scale the canvas down to a reasonable size.
  4. With the canvas still selected, do Create –> UI –> Text.  This will put text in the canvas.  Set the color and properties of the text in the inspector.
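If you’d rather build the same thing from code, here’s a sketch ( the position and scale values are my guesses; tune to taste ):

// Build a world-space canvas with a text child at runtime.
// The key step is scale: canvas units are pixels, world units are meters.
GameObject canvasGO = new GameObject("WorldCanvas");
Canvas canvas = canvasGO.AddComponent<Canvas>();
canvas.renderMode = RenderMode.WorldSpace;
canvasGO.transform.position = new Vector3(0, 1.6f, 2);  // in front of the default camera
canvasGO.transform.localScale = Vector3.one * 0.001f;   // shrink pixel units down to meters

GameObject textGO = new GameObject("txtMainData");
textGO.transform.SetParent(canvasGO.transform, false);
UnityEngine.UI.Text text = textGO.AddComponent<UnityEngine.UI.Text>();
text.font = Resources.GetBuiltinResource<Font>("Arial.ttf");
text.text = "hello world";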

How do I start recording audio while the user has the app button down?

This turns out to require a new project and an updated Google SDK.  In the old SDK, microphone permissions couldn’t be acquired; with the update, the existing sample code works.

  1. Add an audio source to the component that is going to record.
  2. Decorate the GameObject source code for the recording component like this:
    1. [RequireComponent(typeof(AudioSource))]
  3. Add a private AudioSource to your GameObject derived class:
    1. private AudioSource microphoneAudioSource = null;
  4. Check for AppButtonDown in your Update function:
    1. if (GvrController.AppButtonDown) { // statements }
  5. Create a ringbuffer and call Microphone.Start like this:
    1. microphoneAudioSource.clip = Microphone.Start(null, true, 10, 16000);
      microphoneAudioSource.loop = true;
  6. Finish recording on AppButtonUp like so:
    1. if (GvrController.AppButtonUp) {
      int recordingPosition = Microphone.GetPosition(null); // do before calling end!
      Microphone.End(null);
      }
  7. AudioSource.clip will now contain the AudioClip.
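Pulled together, the steps look roughly like this ( a sketch assuming the GvrController API described above; the class name is mine ):

using UnityEngine;

[RequireComponent(typeof(AudioSource))]
public class PushToTalk : MonoBehaviour
{
    private AudioSource microphoneAudioSource = null;

    void Start()
    {
        microphoneAudioSource = GetComponent<AudioSource>();
    }

    void Update()
    {
        if (GvrController.AppButtonDown)
        {
            // 10 second ring buffer at 16 kHz from the default microphone.
            microphoneAudioSource.clip = Microphone.Start(null, true, 10, 16000);
            microphoneAudioSource.loop = true;
        }
        if (GvrController.AppButtonUp)
        {
            int recordingPosition = Microphone.GetPosition(null); // read before calling End!
            Microphone.End(null);
            // microphoneAudioSource.clip now holds the recording up to recordingPosition.
        }
    }
}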

 

Troubleshooting:

Problem: controller emulator won’t connect in the player.
Solution: The controller emulator device must be the first device listed in adb devices.  This is a problem in that some services on the host take ports 5555 and up, and adb will sometimes see those.  Try running adb kill-server, then adb devices, with your emulator phone attached.

Problem: Can’t install apk packages built on different PCs from same source code.
Solution: You must uninstall the previous package first.  You can use the Android package manager (pm) to find the previously installed package, then run the uninstall command like so:
adb shell pm list packages
adb uninstall <com.xxx.yyy> (aka your old package)

Problem: Can’t get Mic input.
Solution: Reinstall the GVR assets, build for a non-VR Android target, run without VR, then re-enable the Android VR target.  This seems to be caused by a bug in permission management in VR vs stock Android.  Once your app has the perms, it keeps them.

 

December 12, 2016

Town revitalization

Filed under: Uncategorized — ipeerbhai @ 8:11 pm

I’ve been thinking about how economically disadvantaged cities and small communities can revitalize their towns.  This raises the question — what makes a good town?

Do good schools make a good town, or is it backwards — does a good town make good schools?  Do good jobs make a good town, or does a good town make good jobs?

Historically, good towns have sprung up around two forces: natural transportation interfaces and global export capacity.  Natural transportation interfaces are ways to ship things by boat — ports and rivers.  Global export capacity is just that — making a product that can be sold globally ( computer software, oil, etc. ).

So, imagine that you’re a suburb or small town someplace whose chief export is your people.  Your town used to make something sold far and wide — but that product’s factory closed or moved away.  Now the town has a surplus of productive population relative to the local demand pool.  What use is a town full of genius-level people who don’t have a way to package their genius into products that can be shipped anyplace?

And I do believe that most people are genius level.  To prove it: https://en.wikipedia.org/wiki/Flynn_effect — average IQ ( that is, mean IQ ) has increased 60 points since 1930.  The average IQ of people in 1930, using today’s scale, would be 70.  The same scale has “normal” people today at 130.  70 was considered “mentally retarded”, and 130 is considered borderline genius!  How did the average change so much?  The answer is that society asked more of people, and so they rose to the occasion.  This creates a catch-22: if you don’t ask more of people, they don’t improve.  But if you ask too much — test too often or too finely — then they fail to improve.

So, to revitalize a failing town, you need to create a seed where people can ask more of themselves, but on their own.  Here’s my hypothesis for a recipe that a government can implement:

  1. Establish a makerspace and course offerings at different levels of abstraction.  It’s not difficulty that makes things hard to learn — it’s abstraction.  Driving is very hard — we can barely teach robots to do it now.  But almost everyone can learn it — it’s very concrete.  Conversely, calculus is often hard to learn, even though it’s straightforward ( in fact, one of the first AI programs was an equation solver showing that calculus problems can be solved with only a handful of steps put into the correct sequence: https://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-034-artificial-intelligence-fall-2010/lecture-videos/lecture-2-reasoning-goal-trees-and-problem-solving/ ).  The difference?  Abstraction.  Driving is concrete, with simple words and simple controls.  Calculus is abstract, with complex words and a search tree of possible steps.
  2. Establish a business incubator near, but not in, the makerspace.  Incubators are essentially tables, meeting spaces, nap spaces, coffee, Internet, entertainment, and art.  You should be able to get a quick nap at one.  You should be able to work securely at one, without fear that your laptop will get stolen.  You should be able to feel inspired.  It has to be quiet — no loud machines.  Think Starbucks — but you don’t feel bad if you’re there all day.  Subsidize coffee, drinks, and meals.  Allow businesses to carve out private spaces from the public space ( with rent payments, of course ).  Allow individuals to work there, with token payments to keep out the homeless/shiftless.
  3. Establish a few different grant types.  Grant type 1 — the “search” grant: about $20k, to an individual.  People apply to the grant and show effort.  Success is not the way to award the grant — effort is.  Find metrics that are hard to game and that show effort; a good example is a letter of reference from someone respected in the community.  You’ll need to give away about 10-20 of these grants a year.  Grant type 2 — the seed grant: $40-60k, given for traction.  That is, either sales/sales growth or user growth.  You’ll need to give away about 3 of these a year.
  4. Market the heck out of all three.  Have schools do field trips to the makerspace.  Have meetups hosted in the incubator, for free.  Give away pamphlets about the grant program.  You’ll need to get about 400 people into the grant pipeline every year, award search grants to 10-20 of them, and award traction grants to 3.

This is all super cheap for a community, especially one that has buildings standing around and some tax base left.  We’re talking $100K for the makerspace up front, with maybe $60K/yr in operating expense.  The incubator is likely even less.  The grants are the most expensive thing, at roughly $400K a year in “search” grants and $200K in traction grants ( that’s 20 search grants at $20K each, and three traction grants at $40-60K each, rounded up ).  The grants should be secured grants — secured by equity or equity options.

If you do this — less than a million dollars a year — over 5 years, you’ll develop new businesses and change the dynamic of your town.  This process is the seed around which small businesses will form.  Some of those will grow, perhaps quite large.  It’s the formula VCs use, but are not fully cognizant of.  If your town has been in decline for a long time, then you may need to start without the grants.  The grants keep people afloat while they start a business — without them, you’ll get higher drop-out rates — but you’ll eventually find success.  And success means job growth!

November 21, 2016

Steps to Install TensorFlow with GPU on Windows

Filed under: Uncategorized — ipeerbhai @ 1:11 am

I normally use Encog and a self-written learning framework when I do audio pipeline learning.  I’ve been tempted by CNTK and TensorFlow.  CNTK uses tools whose licenses are, sadly, too restrictive.  TensorFlow’s ecosystem is more in line with what I need.

I’m a Windows guy, and I can use TensorFlow (TS) via Docker.  But I want to use my GPU.  I have a CUDA-compliant GPU on one of my machines, along with Windows 10 and Visual Studio Community.  The official readme is designed for VS Pro, not Community.  The key difference is that with VS Community, only the 64-bit TensorFlow build with CUDA is supported, not the 32-bit one.

Here are the steps I’ve figured out so far:

Prerequisites.

You’ll need SWIG, CUDA, the Nvidia cuDNN library, Git, CMake, Python 3.5, and numpy 1.11.  You can use Anaconda to satisfy the Python/numpy requirement: install Anaconda, then conda install numpy in an elevated command prompt.  For the rest, you’ll have to download installers and install them.  Oh, and Visual Studio Community 2015.  I’ll assume a default install drive of C:.  I’ve adapted the steps from the official GitHub readme here:

https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/cmake/README.md

You’ll want to read that first, as the changes are pretty minor.

Also, if you have Python 2.7, remove it from your path — it’ll interfere with CMake.

Steps

  1. Launch a CMD window and set up the environment.
    1. Put all the above pre-reqs in your path environment variable, except for Visual Studio.
    2. run "C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\bin\amd64\vcvars64.bat"  ( This is changed from the official docs )
    3. Put CMake in your path: set PATH="%PATH%;C:\Program Files\CMake\bin"  ( different from the docs )
  2. Git pull tensorflow, change into the CMake build dir, then run this CMake invocation:
    1. cd /d “C:\temp\tensorflow\tensorflow\contrib\cmake\build”
    2. cmake .. -A x64 -DCMAKE_BUILD_TYPE=Release -DSWIG_EXECUTABLE="C:/tools/swig/swig.exe" -DPYTHON_EXECUTABLE=C:/Users/%USERNAME%/AppData/Local/Continuum/Anaconda3/python.exe -DPYTHON_LIBRARIES=C:/Users/%USERNAME%/AppData/Local/Continuum/Anaconda3/libs/python35.lib -DPYTHON_INCLUDE_DIR="C:/Program Files/Anaconda3/include" -DNUMPY_INCLUDE_DIR="C:/Program Files/Anaconda3/lib/site-packages/numpy/core/include" -Dtensorflow_ENABLE_GPU=ON -DCUDNN_HOME="C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v8.0"
  3. In theory, you can MSBuild the resulting vcproj from step 2.  I’ve found some build breaks, so I’ll update this post when I’ve figured them out.  Here’s the list so far:
    1. Error C1083: Cannot open include file ‘tensorflow/cc/ops/image_ops.h’: No such file or directory ( project tf_label_image_example, file c:\temp\tensorflow\tensorflow\examples\label_image\main.cc, line 38 )
    2. Error C1083: Cannot open include file ‘tensorflow/cc/ops/array_ops.h’: No such file or directory ( project tf_tutorials_example_trainer, file c:\temp\tensorflow\tensorflow\cc\ops\standard_ops.h, line 19 )
    3. Error LNK1104: cannot open file ‘Debug\tf_core_gpu_kernels.lib’ ( project grpc_tensorflow_server, C:\temp\tensorflow\tensorflow\contrib\cmake\build\LINK, line 1 )
    4. Error LNK1104: cannot open file ‘Debug\tf_core_gpu_kernels.lib’ ( project pywrap_tensorflow, C:\temp\tensorflow\tensorflow\contrib\cmake\build\LINK, line 1 )

 

Docker, Tensorflow, and scikit-learn on Windows

Filed under: Uncategorized — ipeerbhai @ 12:58 am

I wanted to play around with the Docker version of TensorFlow while I try to fix the build breaks in the GPU-accelerated Windows TS deployment from the previous post.

There’s already a TS Docker image.  I needed to get it and modify it.  Here are the steps I took to do that.

Prerequisites:

Have a Windows machine running Docker, either via VirtualBox or Hyper-V.  You’ll need to know how to set a port-forwarding rule to the default Docker VM.

Steps:

  1. Pull the image: “docker pull gcr.io/tensorflow/tensorflow”
  2. Run the image: “docker run -it -p 8888:8888 gcr.io/tensorflow/tensorflow”
  3. Exec a shell:
    1. “docker ps” ( find the container ID )
    2. “docker exec -it [CONTAINER ID] bash”
  4. install scikit-learn via pip in the image: pip install scikit-learn
  5. exit the bash shell
  6. Create a port forward rule from localhost:[PORT] to [default:8888].
  7. shutdown the jupyter notebook running by default in the TS image.
  8. docker commit [CONTAINER ID] tensorflow-local

You can now run the image you created with:

docker run -it -p 8888:8888 tensorflow-local

This gives you a Jupyter notebook server with TS and scikit-learn as a Docker machine.

Now, if only nvidia-docker would work on Windows.  A man can dream…
