July 28, 2014
O’Reilly’s OSCON 2014 was just held in Portland. I spent a few days there at the invitation of the event organizers.
OSCON proposes that open source has evolved “from disruption to default.” I attended the conference to better understand the open source movement, specifically as it relates to the work of the Kinoma team.
We use a lot of open source software in Kinoma Create, which is a Linux device at heart. And we have begun releasing code and hardware designs under open licenses. But we are just starting the process of growing a community around that, so there’s a lot to learn.
What impressed me most at this year’s OSCON
- Catherine Farman and Corinne Warnshuis of Girl Develop It explained the successful steps they have taken to encourage more women to get involved in open source. Their approach to mentoring looks like an effective way to move more people—of all genders—off the sidelines of open source.
- Michael Enescu, the CTO of Open Source Initiatives at Cisco, spoke of the importance of open source to the Internet of Things. He made the bold assertion that “All IoT will be open source” because that openness brings credibility. Great words to hear from the largest networking company on our planet. Michael also talked about the importance of open protocols, including MQTT. I’m warming up to MQTT, and Andy Piper’s “A Walking Tour of MQTT” showed the momentum growing behind it.
Kinoma Create at OSCON’s Hardware Showcase
The OSCON Hardware Showcase made its debut this year. We were selected to share how Kinoma Create incorporates open source and communicates using open protocols.
All in all, OSCON brought together a dynamic, extremely technology-savvy audience welcoming of our new ideas. We look forward to more involvement next year.
Recently, our summer interns competed in a two-day Internet of Things World hackathon in Palo Alto. This is the true story of how a fresh-faced team of new recruits planned, built, and demonstrated their first Kinoma project.
Tuesday, 11 am: Brainstorming
Challenged to create a product in either the “consumer” or “environment” category using the Marvell 88MC200 microcontroller, our first move was to brainstorm.
With a hot pink Sharpie, we sketched out a bunch of ideas, including the “Internet of Babies,” an RFID-powered doggie door, and public restrooms that won’t let you out until you use hand sanitizer.
Tuesday, 3 pm: Idea Selection
After voting and discussion, we decided to work on context-aware advertising. To us, this means posters and billboards that know where they are, and can share live characteristics—such as sensor data and web service data—with their advertisers.
Advertisers, in turn, can bid on spaces based on their profile at a given time. For example, a company may bid more on a display because there is a traffic jam nearby (in which case the ad is likely to be seen by more people).
In another use case, a sunglasses company can bid on spaces with high luminosity.
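The bidding model above can be sketched in a few lines. Everything here is hypothetical and purely illustrative (we never wrote a bidding engine at the hackathon): the idea is simply that each bid pairs a context condition with an offer, and the display runs the highest eligible offer at any moment.

```python
# Illustrative sketch of context-aware ad bidding.
# All names and numbers are made up for illustration.

def winning_bid(bids, context):
    """Each bid is (predicate, offer, ad). The display runs the
    highest offer whose context predicate holds right now."""
    eligible = [(offer, ad) for predicate, offer, ad in bids if predicate(context)]
    return max(eligible, default=(0, None))[1]

bids = [
    (lambda c: c["traffic_jam"], 12.0, "commuter-app ad"),     # more eyeballs nearby
    (lambda c: c["lux"] > 50_000, 9.0, "sunglasses ad"),       # bright, sunny location
    (lambda c: True, 1.0, "house ad"),                         # fallback filler
]
```

For example, a traffic jam would make the commuter-app ad win even on a sunny day, since its offer is higher.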
For our demo, we decided to build a display that changes its content based on the temperature. When the temperature is cool, it shows an ad for sweaters. When it’s hot, it flips to an ad for frosty-cold soda.
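The display logic is simple enough to sketch. This is not KPR code, just a hypothetical Python illustration of the behavior: the 82-degree threshold is the one from our demo, and the small hysteresis band is our own addition so the billboard doesn’t flicker when the reading hovers near the threshold.

```python
# Hypothetical sketch of the demo's display logic (not KPR code).
HOT_THRESHOLD_F = 82  # the threshold used in the live demo

def choose_ad(temperature_f, current_ad="sweaters", hysteresis=2):
    """Pick which ad face to show. Readings inside the hysteresis
    band keep whatever ad is currently showing, to avoid flicker."""
    if temperature_f >= HOT_THRESHOLD_F:
        return "soda"
    if temperature_f <= HOT_THRESHOLD_F - hysteresis:
        return "sweaters"
    return current_ad  # inside the band: no change
```

A reading of 85°F flips the display to the soda ad; 75°F flips it back to sweaters; readings in between leave it alone.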
Tuesday, 6 pm: Hardware Hacking
To help us debug and display the project, we whipped up an app in KPR that showed the temperature and a button for flipping the display.
Wednesday, 10 am: X-Acto Time
As the presentation drew nearer, we crafted the enclosure for the hardware and glued the example advertisements onto two pieces of styrofoam representing the billboard.
Wednesday, 3 pm: Pitching the Project
We got to see a lot of cool demos from the other participants. One team presented a mouth guard that keeps track of teeth grinding, and another group used their device to measure the water quality of the pool at the event venue.
During our live demo, our teammate John simulated warming weather by blowing hot air onto the sensor. With the temperature reading shown on the Kinoma Create, the crowd went wild when John hit the 82-degree threshold and the ad flipped to the Coca-Cola side.
Wednesday, 4 pm: The Results
We were very happy that our peers enjoyed the demonstration, and honored to be awarded first prize in the “Built Environment” category.
The Kinoma Create really shone as a presentation tool for the demo. Rather than a mess of breadboards and wires, the hardware was packed neatly into the Kinoma Create’s enclosure. We were able to keep working on the software until the very last seconds because the Kinoma Create already looked polished.
Another strength of the Kinoma Create was its ability to communicate with the audience during the demo. With the sensor’s temperature displayed in a large font, the entire audience could tell what was going on inside the hardware. Without the screen, there would have been major dead air while we waited for the temperature to hit the threshold that switches the display.
In short: The Kinoma Create allowed us to make a complete, sophisticated prototype very rapidly, and proved ideal for demonstrating our prototype to others.
July 11, 2014
Thanks to our friends at O’Reilly, you now have access to video of Andy Carle’s entire talk from Solid: “Kicking Down Silos: Co-Designing Software & Hardware to Create Great Products.” Andy is Kinoma’s resident UX Strategist and Usability Scientist, and this talk comes straight out of his ongoing experience developing Kinoma Create.
By watching this top-rated talk, you’ll learn how to carve a clear path from concept, to prototype, to hardware product by:
- Preserving progress between prototypes
- Making user tests as authentic as possible
- Ensuring small jumps between prototype generations
Please enjoy and share!
If you want to review the presentation slides independent of the video, they are below (and downloadable in PDF).
O’Reilly Solid is a new annual event focused on the intersection of software and hardware, exploring how we’re all about to experience a profound transformation because of the creation of a software-enhanced, networked physical world.
This is a guest post by Ian Skerrett, who runs marketing activities for the Eclipse Foundation. He supports projects and member companies to increase the awareness of all the cool stuff happening at Eclipse.
Eclipse is a community for individuals and organizations who wish to collaborate on commercially-friendly open source software.
The Internet of Things (IoT) is the current ‘in thing’ for the technology industry. Vendors large and small are rushing in with products and solutions ranging from wearables, to connected cars, to industrial automation.
IoT is impacting a wide range of industries and will have lasting impact for years to come. However, to ensure this success, the IoT will need to embrace open standards and open source software.
Being Open Wins
The current IoT industry is characterized by a number of proprietary solutions from companies that might have an open API, but no chance of connecting or communicating with another proprietary solution. In essence, we have a number of solutions to build Intranets for Your Things. We need to do better, and an open approach is the way to go.
The IoT industry needs to learn from the history of the Internet: being open wins. We would not have the Internet we have today if Tim Berners-Lee had decided to patent his inventions and start a VC-funded company to take on CompuServe or AOL, the giants of the day. The Internet runs on open source implementations (e.g., Linux and the Apache HTTP Server) and open standards. To succeed, IoT will need to do this, too.
For IoT to succeed, interoperability must be a given.
I advocate focusing on a core set of open building blocks and tools that will be used industry-wide, based on:
- Open standards
- Open source implementations of these standards
- Open source frameworks that make it easy for developers to build IoT solutions
No single company should control these building blocks and certainly no one company should profit from them. The building blocks need to be open for anyone to use, without having to ask for permission.
Developers are the Engines of Innovation
Developers will be the driving factor that compels the IoT industry toward an open approach, because they are the engines of innovation and adoption. To attract developers to a new technology, you need to have very low barriers to entry. Open source provides the perfect mechanism for engaging with developers and keeping barriers to adoption very low.
Openness for IoT is Underway
Companies and individuals are already building an open community for IoT. The Kinoma team inside Marvell has taken the first steps down an open road.
At Eclipse, we are building an open source community to provide some of the basic technology building blocks for IoT. Eclipse IoT has 15 different open source projects, including implementations of popular open IoT standards such as MQTT, CoAP, and Lightweight M2M. We also provide open source frameworks for building IoT gateways, home automation solutions, and SCADA solutions. The goal is to become the place for developers and companies to collaborate on building open source technology for IoT.
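To give a feel for one of these standards: MQTT routes messages by matching hierarchical topic names against subscription filters, using `+` as a single-level wildcard and `#` as a multi-level wildcard. A minimal Python sketch of that matching rule (a simplified illustration of the spec’s behavior, not production code):

```python
def topic_matches(filter_, topic):
    """Return True if an MQTT topic filter matches a topic name.

    Implements the single-level (+) and multi-level (#) wildcard
    rules from the MQTT specification, ignoring edge cases such
    as $-prefixed system topics.
    """
    f_levels = filter_.split("/")
    t_levels = topic.split("/")
    for i, f in enumerate(f_levels):
        if f == "#":                 # multi-level: matches the rest, and the parent itself
            return True
        if i >= len(t_levels):       # filter is deeper than the topic
            return False
        if f != "+" and f != t_levels[i]:
            return False
    return len(f_levels) == len(t_levels)
```

So a subscription to `sensors/+/temperature` receives `sensors/room1/temperature` but not `sensors/room1/humidity`, and `sensors/#` receives everything under `sensors`.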
A Two-Year Migration
Over the next one to two years, expect to see the industry migrate to a more open approach. The current closed proprietary approach is too expensive and complicated for anyone to implement. History has demonstrated that open wins.
Open standards and open source must be part of the industry’s overall strategy to ensure that IoT truly succeeds.
Dr. Andy Carle is User Experience Architect at Kinoma. His PhD is in Computer Science with a focus on Human-Computer Interaction. Andy has a strong background in experimental design and qualitative research methodologies, and has been designing and running user studies for more than a decade.
Facebook’s study on emotional contagion
Facebook performed a massive online experimental intervention without obtaining proper informed consent. I am extraordinarily troubled by this recent news. The resulting paper is available for review. In short, the authors of the study wrote code to manipulate the Facebook news feeds viewed by 689,003 users over the course of one week in early 2012. For half of the users, the number of posts with positive emotional content shown to them was intentionally reduced, while the other half saw a reduction in posts with negative emotional content.
This manipulation has been called a study in “emotional contagion,” and it caused a small–but statistically significant–impact on the professed moods of the target users. The participants shown more positive posts that week posted more positive things themselves, while those shown negative posts posted more negatively.
The issue is a lack of informed consent
This study and its results are both useful and interesting. However, the manner in which the study was conducted was completely inappropriate, and raises serious questions about how Facebook is designing, approving, and executing experiments to be conducted on their massive user base.
The issue to focus on is informed consent: did the participants in this study know enough about it to make an informed decision to participate and was that consent properly obtained? The authors–whom I don’t know personally but are reasonably well known (and particularly well liked) in the social psychology and CHI communities–say that the answer is “yes.” They claim that Facebook’s terms of service and data use policy permit such an experiment. Indeed, there is no reasonable question that such an experiment is within Facebook’s legal rights.
But that is where I would draw the line: legal, but decidedly not ethical.
UC Berkeley’s definition of informed consent begins:
“A person’s voluntary agreement, based upon adequate knowledge and understanding of relevant information, to participate in research…”
This “adequate knowledge” is understood to include items that were not conveyed to the participants in this 2012 Facebook research: risks of participation in the study, potential benefits to humanity from the work being conducted, potential personal benefits of participation, alternatives to participation, etc. Without the research participant having been given adequate knowledge and understanding of the study they are participating in there can be no informed consent, and any assertion to the contrary is extremely suspect.
But was informed consent really necessary? Yes.
This opens a secondary question for consideration: was informed consent necessary for this study? The acceptable reasons for skipping informed consent are detailed, but in general are: 1) that the research involves no more than minimal risk to its participants and 2) that the research is merely a study of something that was happening anyway; that is, it involves no intervention on the part of the researchers.
No rational argument could propose that the study at hand meets either of these criteria. On the question of risk, I would suggest that the risks involved in seeing negative vs. positive news feed posts were precisely the construct being studied in this experiment. The authors had no idea what risks they were taking with their participants because they were trying to answer that question themselves. Therefore, consent was necessary.
It was a social psychology experiment
The second question is more interesting and contains more room for debate. Many people have quite rightly pointed out that a strict adherence to rules of informed consent would make A/B testing of user experience design decisions impossible at scale. This is where it becomes tricky. Facebook can (and certainly does) show different types of feeds to different users all the time in an effort to improve their product. And, indeed, I would grant considerable leeway here if this study were strictly analytical/descriptive in nature, especially as it appears that proper steps were taken to avoid disclosure of personal information. But this line of reasoning falls apart for this particular study if examined at any length.
What the researchers were looking at was not something that Facebook’s UX team would have been doing as a part of their normal business: rather, it was explicitly a social psychology experiment designed to determine an underlying fact about human nature, not merely to inform design decisions.
This was a research intervention in the classic sense and should be treated as such. Proper informed consent was not sought when the intervention was explicitly hypothesized to impact people’s emotional state. This is deeply disturbing.
No Institutional Review Board would have approved this research protocol as it ended up being executed. Researchers trained for working with human subjects must know better. And the ones who do not do this part of the job right damage the reputation and credibility of an entire profession.
How Facebook can correct course
It is perfectly understandable why Facebook has an in-house social psych research group interested in running these sorts of experiments. With a little oversight, there would be nothing wrong with doing so. If Facebook wants to correct course, they need to quickly establish oversight in a formal and transparent way:
- Facebook’s research efforts should be governed by an IRB composed of both internal and external individuals from the HCI/social psych world.
- Every potential study should be presented to this IRB for approval and decisions from the IRB should be binding.
- Approvals from this IRB should be publicly disclosed as quickly as is reasonably possible, given reasonable consideration for IP and design concerns.
This is the only way to save face on this debacle and ensure some reasonable sense of ethics going forward.
P.S. Some late additions as this story develops:
- The lead author on the paper has made a Facebook post responding to criticism of this study. In it, he notes that “While we’ve always considered what research we do carefully, we (not just me, several other researchers at Facebook) have been working on improving our internal review practices. The experiment in question was run in early 2012, and we have come a long way since then.” It is encouraging that this is an issue being taken seriously, but there needs to be dramatic transparency in these “improved” review practices to restore credibility. I remain troubled that they are not backing away from their claims that a generic Terms of Service agreement constitutes informed consent for social psychology intervention studies.
- Some media outlets are reporting that the IRB at Cornell University reviewed this study protocol before their faculty member officially became involved with the project. I don’t have enough information here to make a fully informed assessment of this claim, but here is what I think is being reported: I believe that Cornell’s IRB approved of their faculty member getting involved in the analysis of this data after it had already been collected by Facebook. It seems to have been approved on the exemption for pre-existing data sets, which means that the IRB did not make a judgement on the appropriateness of the methods used in the collection of the data. I’m going to guess that this is a decision that IRB would like to have back.