Let me tell you a story about the Moonlite App: an app that turns your mobile phone into a storytime projector and works seamlessly with its hardware extension, enabling modern parents to tell stories like never before. When we first encountered the Moonlite idea, to be honest, it sounded a bit like a fairytale. But in a good way. Let us explain how we hit the Bullseye.
Moonlite started off as a crowdfunding idea and managed to raise over $350,000 on Kickstarter and even more on Indiegogo. Five got the chance to bring this marvelous idea to life.
This time, it all started with the hardware. We received a clip-on projector engineered to work with an app on the mobile phone. It wasn’t the first time we had worked on an app paired with a piece of hardware. We knew what we were doing, but we also knew we would have to overcome some serious challenges to deliver a device aiming to revolutionize the storytelling process.
Brainstorming And User Testing
So, we rolled up our sleeves and started brainstorming. The first goal was to explain to users how the projector works; the second was to help them actually use it.
As always, our team had a wide range of ideas. Should we go with a simple onboarding? An animated onboarding? Maybe a video tutorial instead?
The Real Challenge
After a round of user testing, we got answers to some of our questions. Most users got the hang of the projector after a few tries, but many had problems attaching it. Users were turning their phones around, upside down. It was clumsy and troublesome.
And then it hit us: why don’t we give our users a target on the phone screen to indicate where the projector clip should go? So we arranged another round of user testing. This time, we presented users with a simple target for the projector clip. It worked like a charm.
“Users were turning their phones around, upside down. It was clumsy and troublesome.”
Target Design: The Concept of The Bullseye
We made a final decision and created an animated onboarding built around an on-screen target that guides users in attaching the projector. We called it The Bullseye, and it looked like this:
Users could now try out the projector as they advanced through the onboarding.
The users loved The Bullseye, and it looked great, but we had another problem: how on Earth should we implement a thing like that?
As an engineer, one of the first challenges I thought of was how to get the right screen coordinates across the wide variety of (primarily) Android devices out there. Screen coordinates differ for every Android hardware model because they depend on the position of the flashlight on the device.
Furthermore, we wanted to be able to update coordinates on the fly, without having to ship an app update through the store.
For that, we needed back-end support: a device should fetch the proper coordinates and display The Bullseye.
This meant we had to map some piece of device information to the coordinates on the back-end. After some research and testing, we found that the device model best suited our needs. Luckily, Android exposes an API to easily get the model of the device the user is using: Build.MODEL.
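As a rough sketch of that lookup (the endpoint path and query shape here are our own illustration, not the real Moonlite API), the model string can serve directly as the key when requesting coordinates from the back-end:

```kotlin
import java.net.URLEncoder

// Sketch only: on a device you would pass in android.os.Build.MODEL;
// the model is a parameter here so the function stays testable off-device.
fun coordinateEndpoint(baseUrl: String, model: String): String {
    // URL-encode the model, since values like "Nexus 5X" contain spaces.
    val encoded = URLEncoder.encode(model, "UTF-8")
    return "$baseUrl/coordinates?model=$encoded"
}
```

The back-end then responds with the stored bullseye coordinates for that model, or a sensible default when the model is unknown.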
Saving coordinate values seemed straightforward: just take the pixel values of the bullseye and you are done. But did you know that some Android devices can change their screen resolution? A logical way to handle changing resolutions is to store the coordinates as a percentage of the screen’s width and height. So that is exactly what we did.
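A minimal sketch of the percentage-based storage idea (type and function names are ours, for illustration): coordinates live as fractions of the screen size, and pixel positions are recomputed from whatever resolution the device currently reports.

```kotlin
// Bullseye position stored as fractions of screen width/height,
// so the same record works across resolution changes.
data class BullseyePercent(val xPct: Double, val yPct: Double)

// Convert stored fractions back to pixels for the current resolution.
fun toPixels(b: BullseyePercent, widthPx: Int, heightPx: Int): Pair<Int, Int> =
    Pair(Math.round(b.xPct * widthPx).toInt(), Math.round(b.yPct * heightPx).toInt())
```

If the user switches, say, from 1440p to 1080p, only the width/height arguments change and the bullseye lands in the same physical spot.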
All we had to do now was take out a ruler, measure every possible device, determine the bullseye coordinates, and store them on the server. Right?
Wrong. We are developers, and we work with code, not rulers.
There’s an App For That
So you’re telling me there’s an app that replaces rulers and does all the measuring of our coordinates in pixels? Of course not. So we made one.
The workflow is simple: we attach the projector to a device and adjust it until the projected image looks just right. Then we move the on-screen bullseye to match the physical clip and hit the Send/Save button to send the coordinates to our back-end.
Devices with the same model identifier can immediately use these new coordinates and show a bullseye that helps users align the projector.
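The measuring app runs the conversion in the opposite direction. A hedged sketch (function name is ours): the bullseye’s on-screen pixel position is turned back into fractions before being sent to the back-end, so it is ready to be re-applied on any resolution.

```kotlin
// Convert the measured bullseye pixel position into screen fractions,
// the form in which coordinates are stored on the back-end.
fun toPercent(xPx: Int, yPx: Int, widthPx: Int, heightPx: Int): Pair<Double, Double> =
    Pair(xPx.toDouble() / widthPx, yPx.toDouble() / heightPx)
```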
There is one thing we learned the hard way, though: devices that are physically identical can report different model numbers through Build.MODEL.
Here is a handy list of Google Play supported devices and the model values they return.
When coordinates are created or updated from the bullseye app, we look up the group of model identifiers that contains the given model and update the coordinates for the whole group.
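That grouping step might look something like the following sketch. The group contents are illustrative examples (real Samsung Galaxy S7 variant codes, but not Moonlite’s actual data), and the lookup falls back to a singleton group for unknown models:

```kotlin
// Groups of Build.MODEL strings that refer to physically identical devices.
// These sets are illustrative examples, not real Moonlite data.
val modelGroups = listOf(
    setOf("SM-G930F", "SM-G930V", "SM-G930T"),  // e.g. Galaxy S7 carrier variants
    setOf("Nexus 5X")
)

// Find the group a model belongs to; an unknown model forms its own group,
// so an update still applies to at least that one identifier.
fun groupFor(model: String): Set<String> =
    modelGroups.firstOrNull { model in it } ?: setOf(model)
```

An update measured on one variant is then written for every identifier in the returned group.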
To Sum It Up
Building a seamless interaction between your app and its hardware extension is neither easy nor straightforward. But it is definitely worth it.
Tools that help you out along the way can mean a lot. Don’t hesitate to create your own tools; you will learn a lot making them and probably solve your challenges faster.
Sometimes, the tools you create can become part of the app experience themselves. The next thing we are looking into is incorporating the handy bullseye app into the main app.
Coordinates for devices we have not yet measured could then be contributed by the users themselves. That helps us, the contributing user, and every other user of the app.
What do you think about these challenges and our solutions? Would you choose a different approach?