Team “Kinoma Interns” at IoT World Hackathon

Claire Tuna

July 18, 2014

Recently, our summer interns competed in a two-day Internet of Things World hackathon in Palo Alto. This is the true story of how a fresh-faced team of new recruits planned, built, and demonstrated their first Kinoma project.

Tuesday, 11 am: Brainstorming

Challenged to create a product in either the “consumer” or “environment” category using the Marvell 88MC200 microcontroller, our first move was to brainstorm.

With a hot pink sharpie, we sketched out a bunch of ideas, including the “Internet of Babies”, an RFID-powered doggie door, and public restrooms that won’t let you out until you use hand sanitizer.

Tuesday, 3 pm: Idea Selection

After voting and discussion, we decided to work on context-aware advertising. To us, this means posters and billboards that know where they are, and can share live characteristics—such as sensor data and web service data—with their advertisers.

Advertisers, in turn, can bid on spaces based on each space’s profile at a given time. For example, a company may bid more on a display because there is a traffic jam nearby (in which case the ad is likely to be seen by more people).

In another use case, a sunglasses company can bid on spaces with high luminosity.

For our demo, we decided to build a display that changes its content based on the temperature. When the temperature is cool, it shows an ad for sweaters. When it’s hot, it flips to an ad for frosty-cold soda.

Tuesday, 6 pm: Hardware Hacking

Fueled by nachos and energy drinks, we got our hardware up and running. We ran the Kinoma JavaScript interpreter on the 88MC200 to read the temperature from a pressure/temperature sensor and relay that data via WiFi to the Kinoma Create. With that data, the Kinoma Create controlled servo motors to flip the display from one side to the other.
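
At its core, the flip logic is just a threshold check. Here is a minimal sketch of that loop in plain JavaScript; readTemperature() and flipBillboardTo() are hypothetical stand-ins for the sensor relay and the servo control, not the exact code from our demo.

// Minimal sketch of the billboard logic (hypothetical helpers, not the exact demo code)
var THRESHOLD = 82;           // flip point, in degrees Fahrenheit
var currentSide = "sweater";  // the ad currently facing out

function updateBillboard() {
    var temperature = readTemperature();    // hypothetical: latest reading relayed from the 88MC200
    var desiredSide = (temperature >= THRESHOLD) ? "soda" : "sweater";
    if (desiredSide != currentSide) {
        flipBillboardTo(desiredSide);       // hypothetical: drives the servos that rotate the display
        currentSide = desiredSide;
    }
}
// Call updateBillboard() whenever a new temperature reading arrives over WiFi.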

To help us debug and demo the project, we whipped up a KPR app that showed the current temperature and included a button for flipping the display manually.

Wednesday, 10 am: X-Acto Time

As the presentation drew nearer, we crafted the enclosure for the hardware and glued the example advertisements onto two pieces of styrofoam that served as the billboard.

Wednesday, 3 pm: Pitching the Project

We got to see a lot of cool demos from the other participants. One team presented a mouth guard that keeps track of teeth grinding, and another group used their device to measure the water quality of the pool at the event venue.

During our live demo, our teammate John simulated the weather getting warmer by blowing hot air onto the sensor. Watching the temperature readout on the Kinoma Create, the crowd went wild when John pushed the reading past the 82-degree threshold and the ad flipped to the Coca-Cola side.

Wednesday, 4 pm: The Results

We were very happy that our peers enjoyed the demonstration, and honored to win first prize in the “Built Environment” category.

Key Takeaways

The Kinoma Create really shone as a presentation tool for the demo. Rather than a mess of breadboards and wires, the hardware was packed neatly into the Kinoma Create’s enclosure. We were able to keep working on the software until the very last second because the Kinoma Create already looked polished.

Another strength of the Kinoma Create was its ability to communicate with the audience during the demo. With the temperature from the sensor displayed in a big font, the entire audience could tell what was going on inside the hardware. Without the screen, there would have been major dead air while we waited for the temperature to hit the threshold that switched the display.

In short: The Kinoma Create allowed us to make a complete, sophisticated prototype very rapidly, and proved ideal for demonstrating our prototype to others.

“Kicking Down Silos: Co-Designing Software & Hardware to Create Great Products”

Charles Wiltgen

July 11, 2014

Thanks to our friends at O’Reilly, you now have access to video of Andy Carle’s entire talk from Solid: “Kicking Down Silos: Co-Designing Software & Hardware to Create Great Products.” Andy is Kinoma’s resident UX Strategist and Usability Scientist, and this talk comes straight out of his ongoing experience developing Kinoma Create.

By watching this top-rated talk, you’ll learn how to carve a clear path from concept to prototype to hardware product by:

  • Preserving progress between prototypes
  • Making user tests as authentic as possible
  • Ensuring small jumps between prototype generations

Please enjoy and share!

If you want to review the presentation slides independently of the video, they are below (and downloadable as a PDF).

O’Reilly Solid is a new annual event focused on the intersection of software and hardware, exploring how we’re all about to experience a profound transformation because of the creation of a software-enhanced, networked physical world.

IoT Needs Open Source Solutions to Succeed

Ian Skerrett

July 9, 2014

This is a guest post by Ian Skerrett, who runs marketing activities for the Eclipse Foundation. He supports projects and member companies to increase the awareness of all the cool stuff happening at Eclipse.

Eclipse is a community for individuals and organizations who wish to collaborate on commercially-friendly open source software.

The Internet of Things (IoT) is the current ‘in thing’ for the technology industry. Vendors large and small are rushing in with products and solutions ranging from wearables, to connected cars, to industrial automation.

IoT is transforming a wide range of industries and will have a lasting impact for years to come. However, to ensure this success, the IoT will need to embrace open standards and open source software.

Being Open Wins

The current IoT industry is characterised by a number of proprietary solutions from companies that might have an open API, but no chance of connecting or communicating with another proprietary solution. In essence, we have a number of solutions to build Intranets for Your Things. We need to do better, and an open approach is the way to go.

The IoT industry needs to learn from the history of the Internet: being open wins. We would not have the Internet we have today if Tim Berners-Lee had decided to patent his inventions and start a VC-funded company to take on CompuServe or AOL, the giants of the day. The Internet runs on open source implementations (e.g., Linux and the Apache HTTP server) and open standards. To succeed, IoT will need to do this, too.

For IoT to succeed, interoperability must be a given.

Building Blocks

I advocate focusing on a core set of open building blocks and tools that will be used industry-wide, based on:

  • Open standards
  • Open source implementations of these standards
  • Open source frameworks that make it easy for developers to build IoT solutions

No single company should control these building blocks and certainly no one company should profit from them. The building blocks need to be open for anyone to use, without having to ask for permission.

Developers are the Engines of Innovation

Developers will be the driving factor that compels the IoT industry toward an open approach, because they are the engines of innovation and adoption. To attract developers to a new technology, you need to have very low barriers to entry. Open source provides the perfect mechanism for engaging with developers and keeping barriers to adoption very low.

Openness for IoT is Underway

Companies and individuals are already building an open community for IoT. The Kinoma team inside Marvell has taken its first steps down that open road.

At Eclipse, we are building an open source community to provide some of the basic technology building blocks for IoT. Eclipse IoT has 15 different open source projects, including implementations of popular open IoT standards such as MQTT, CoAP, and Lightweight M2M. We also provide open source frameworks for building IoT gateways, home automation solutions, and SCADA solutions. The goal is to become the place for developers and companies to collaborate on building open source technology for IoT.
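
As one small illustration of how low that barrier can be, here is a minimal sketch of publishing a sensor reading with the Eclipse Paho JavaScript client (one of the Eclipse IoT projects), which speaks MQTT over WebSockets; the broker host, port, client ID, and topic below are placeholders, not any specific Eclipse service.

// Minimal sketch: publish a reading over MQTT using the Eclipse Paho JavaScript client.
// The broker host, port, client ID, and topic are placeholders.
var client = new Paho.MQTT.Client("broker.example.com", 8080, "demo-client");

client.connect({
    onSuccess: function() {
        var message = new Paho.MQTT.Message("22.5");   // e.g. a temperature reading
        message.destinationName = "demo/temperature";  // topic to publish to
        client.send(message);
    }
});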

A Two-Year Migration

Over the next one to two years, expect to see the industry migrate to a more open approach. The current closed proprietary approach is too expensive and complicated for anyone to implement. History has demonstrated that open wins.

Open standards and open source must be part of the industry’s overall strategy to ensure that IoT truly succeeds.

Facebook’s Human Lab Rats: The Ethics of Experimentation Without Informed Consent

Andy Carle

June 30, 2014

Dr. Andy Carle is User Experience Architect at Kinoma. His PhD is in Computer Science with a focus on Human-Computer Interaction. Andy has a strong background in experimental design and qualitative research methodologies, and has been designing and running user studies for more than a decade.

Facebook’s study on emotional contagion

Facebook performed a massive online experimental intervention without obtaining proper informed consent. I am extraordinarily troubled by this recent news. The resulting paper is available for review. In short, the authors of the study wrote code to manipulate the Facebook news feeds viewed by 689,003 users over the course of one week in early 2012. For half of the users, the number of posts with positive emotional content shown to them was intentionally reduced, while the other half saw a reduction in posts with negative emotional content.

This manipulation has been called a study in “emotional contagion,” and it had a small, but statistically significant, impact on the professed moods of the targeted users. The participants shown more positive posts that week posted more positive things themselves, while those shown negative posts posted more negatively.

The issue is a lack of informed consent

This study and its results are both useful and interesting. However, the manner in which the study was conducted was completely inappropriate, and raises serious questions about how Facebook is designing, approving, and executing experiments to be conducted on their massive user base.

The issue to focus on is informed consent: did the participants in this study know enough about it to make an informed decision to participate, and was that consent properly obtained? The authors, whom I don’t know personally but who are reasonably well known (and particularly well liked) in the social psychology and CHI communities, say that the answer is “yes.” They claim that Facebook’s terms of service and data use policy permit such an experiment. Indeed, there is no reasonable question that such an experiment is within Facebook’s legal rights.

But that is where I would draw the line: legal, but decidedly not ethical.

UC Berkeley’s definition of informed consent begins:

“A person’s voluntary agreement, based upon adequate knowledge and understanding of relevant information, to participate in research…”

This “adequate knowledge” is understood to include items that were not conveyed to the participants in this 2012 Facebook research: the risks of participation in the study, the potential benefits to humanity from the work being conducted, the potential personal benefits of participation, alternatives to participation, and so on. Without the research participants having been given adequate knowledge and understanding of the study they are participating in, there can be no informed consent, and any assertion to the contrary is extremely suspect.

But was informed consent really necessary? Yes.

This opens a secondary question for consideration: was informed consent necessary for this study? The acceptable reasons for skipping informed consent are detailed, but in general are: 1) that the research involves no more than minimal risk to its participants and 2) that the research is merely a study of something that was happening anyway; that is, it involves no intervention on the part of the researchers.

No rational argument could propose that the study at hand meets either of these criteria. On the question of risk, I would suggest that the risk involved in seeing negative vs. positive news feed posts was precisely the construct being studied in this experiment. The authors had no idea what risks they were taking with their participants because they were trying to answer that question themselves. Therefore, consent was necessary.

It was a social psychology experiment

The second question is more interesting and contains more room for debate. Many people have quite rightly pointed out that a strict adherence to rules of informed consent would make A/B testing of user experience design decisions impossible at scale. This is where it becomes tricky. Facebook can (and certainly does) show different types of feeds to different users all the time in an effort to improve their product. And, indeed, I would grant considerable leeway here if this study were strictly analytical/descriptive in nature, especially as it appears that proper steps were taken to avoid disclosure of personal information. But this line of reasoning falls apart for this particular study if examined at any length.

What the researchers were looking at was not something that Facebook’s UX team would have been doing as a part of their normal business: rather, it was explicitly a social psychology experiment designed to determine an underlying fact about human nature, not merely to inform design decisions.

This was a research intervention in the classic sense and should be treated as such. Proper informed consent was not sought when the intervention was explicitly hypothesized to impact people’s emotional state. This is deeply disturbing.

No Institutional Review Board would have approved this research protocol as it ended up being executed. Researchers trained for working with human subjects must know better. And the ones who do not do this part of the job right damage the reputation and credibility of an entire profession.

How Facebook can correct course

It is perfectly understandable why Facebook has an in-house social psych research group interested in running these sorts of experiments. With a little oversight, there would be nothing wrong with doing so. If Facebook wants to correct course, they need to quickly establish oversight in a formal and transparent way:

  • Facebook’s research efforts should be governed by an IRB composed of both internal and external individuals from the HCI/social psych world.
  • Every potential study should be presented to this IRB for approval and decisions from the IRB should be binding.
  • Approvals from this IRB should be publicly disclosed as quickly as is reasonably possible, given reasonable consideration for IP and design concerns.

This is the only way to save face on this debacle and ensure some reasonable sense of ethics going forward.

— Andy

P.S. Some late additions as this story develops:

  1. The lead author on the paper has made a Facebook post responding to criticism of this study. In it, he notes that “While we’ve always considered what research we do carefully, we (not just me, several other researchers at Facebook) have been working on improving our internal review practices. The experiment in question was run in early 2012, and we have come a long way since then.” It is encouraging that this is an issue being taken seriously, but there needs to be dramatic transparency in these “improved” review practices to restore credibility. I remain troubled that they are not backing away from their claims that a generic Terms of Service agreement constitutes informed consent for social psychology intervention studies.
  2. Some media outlets are reporting that the IRB at Cornell University reviewed this study protocol before their faculty member officially became involved with the project. I don’t have enough information here to make a fully informed assessment of this claim, but here is what I think is being reported: I believe that Cornell’s IRB approved of their faculty member getting involved in the analysis of this data after it had already been collected by Facebook. It seems to have been approved on the exemption for pre-existing data sets, which means that the IRB did not make a judgement on the appropriateness of the methods used in the collection of the data. I’m going to guess that this is a decision that IRB would like to have back.

Hoedown! with “The Savages” and Kinoma Create

Andy Carle

June 20, 2014

Hoedown! is a cowboy-themed roleplaying game invented by the co-founders of Savage Internet—Valkyrie Savage and Evan Savage—and realized with Kinoma Create. Based on how much it was enjoyed by SXSW attendees on the expo floor, we think it’s a great example of Kinoma Create’s ability to help develop experiences as well as devices.

About the Creators

Valkyrie is a PhD student in Computer Science at UC Berkeley with extensive hardware experience. Evan has deep experience with software and is drawn to interesting technologies and projects. Together, they were an ideal team to explore what’s possible with Kinoma Create.

As a software developer, Evan felt Kinoma Create was more welcoming and productive than starting with bare development boards, and liked that he could wirelessly upload and debug his code. Valkyrie was impressed with the interactivity possible with the device. She called its built-in screen “a total luxury” for being able to easily display feedback from her sensors.

The game in three parts

The Hoedown! game is played in three stages:

  • Testing — In less than 5 minutes, players’ “cowboy skills” were assessed with the help of Kinoma Create connected to various sensors (see the technical details below)
  • Branding — Based on testing, players were “branded” as one of seven character types
  • Playing — Players could meet and play with other characters in the game to win prizes

Sensored

Hoedown! uses four different sensors and the Kinoma Create touchscreen to determine ability scores:

  • A temperature sensor was used to measure the player’s hand temperature (Charisma)
  • Blood Alcohol Content sensors tested whether players had ingested alcohol recently (Constitution)
  • A stretch sensor measured the size of the player’s head (Intelligence)
  • A flex sensor tallied how rapidly a player was able to squeeze a hand strength trainer (Strength)
  • The touchscreen allowed players to solve a maze (Dexterity) and to sketch their vision of an idealized beard (Wisdom)

The temperature sensor uses Kinoma Create’s I2C pins, while the other three sensors use analog inputs.

Code details

Here is the code for the I2C-based temperature sensor:

// Initialize the I2C pins & set the I2C slave address
i2c.init( 27, 29 );
i2c.setSlave( 0x5A );

// Read the raw temperature data from the sensor
var tempData = i2c.readWordDataSMB( 0x07 );

And that’s it! There was some minor work to do to convert the sensor’s output format into a temperature in a common unit of measurement, but this is all that was needed for reading the value from the sensor.
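
For the curious, that conversion is only a couple of arithmetic steps. The sketch below assumes an MLX90614-style IR thermometer (consistent with the 0x5A slave address and the 0x07 register read above), whose raw word is in units of 0.02 Kelvin; if the actual part differs, so would the constants.

// Conversion sketch, assuming an MLX90614-style sensor whose raw word is in units of 0.02 K
var tempKelvin = tempData * 0.02;
var tempCelsius = tempKelvin - 273.15;
var tempFahrenheit = tempCelsius * 9 / 5 + 32;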

Next, let’s take a look at the code for the analog-based stretch sensor:

// Initialize the analog-to-digital pin
a2d.init( [ 47 ] );

// Read the analog value (0-1) of the stretch sensor
var VOut = a2d.read( [ 47 ] )[ 47 ];

Again, there was some math still to do to convert the number output by the sensor into a meaningful head circumference. But reading the sensor value itself is essentially trivial.
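
For illustration, that math could look something like the sketch below, which assumes the stretch sensor sits in a simple voltage divider; the resistor value and the calibration constants are hypothetical placeholders, not the ones The Savages used.

// Hypothetical conversion sketch: analog reading -> sensor resistance -> head circumference.
// Assumes the stretch sensor is the lower leg of a divider with a 10K resistor to Vcc;
// every constant here is a placeholder to be calibrated against a few known head sizes.
var R_FIXED = 10000;                           // fixed divider resistor, in ohms
var rSensor = R_FIXED * VOut / ( 1 - VOut );   // solve the divider equation for the sensor's resistance
var C0 = 40, C1 = 0.002;                       // placeholder linear-calibration constants
var circumferenceCm = C0 + C1 * rSensor;       // resistance grows as the sensor stretches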

Focus on the fun stuff

The Savages’ unique, crowd-pleasing project is a good example of how Kinoma Create shines at making the “plumbing” of your project easy so you can focus on the creative part.

How can I get my own Kinoma Create?

You can pre-order Kinoma Create right now. The product will be shipping this fall, and in the meantime you can get started with Kinoma Studio today.