Sunday 23 May 2021

micro:bit music making

The year 6 Fleetville Code Club has kept going all through lockdown – we spent a lot of time in Minecraft, which worked pretty well for virtual sessions, since we were all sharing the same virtual space.

This term we did something a bit different: the students have been making and programming MINI•MU musical gloves. This is a kit designed by Helen Leigh based on the MiMU Glove – in turn invented by the musician Imogen Heap and her team. You can get a glimpse of what the full MiMU Gloves can do in the following video:



The MINI•MU Gloves are considerably simpler than the ones in the video – instead of picking up all finger and hand movements, they use the movement sensors on a BBC micro:bit to detect basic hand motions, like how much you tilt your hand front to back and side to side. It’s still plenty of input to use for a musical instrument though!
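
If you're curious what the micro:bit actually measures, here's a minimal MicroPython sketch of my own (the glove kit itself is programmed with MakeCode blocks) that turns the left-right tilt into a beep whose pitch follows your hand:

    # My own illustration in MicroPython - the glove kit uses MakeCode blocks.
    # Reads left-right tilt and plays a beep whose pitch follows your hand.
    from microbit import accelerometer, sleep
    import music

    while True:
        tilt = accelerometer.get_x()            # roughly -1024 (left) to +1024 (right)
        tilt = max(-1024, min(1024, tilt))      # clamp to the expected range
        music.pitch(440 + tilt // 4, duration=50)   # about 184 Hz to 696 Hz
        sleep(20)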

We received the glove kits thanks to the OpenUK Digital Kids Camp and Huawei – who sent out 3000 kits and provided some cool e-zines, which we used as a basis for our sessions. The students first had to cut the pattern from felt and sew their glove together, then connect the micro:bit, sound board and battery pack – before programming the micro:bit to make their own sounds.

This was a harder project to get working in an all-remote environment, and I think some of the students found it hard to stay engaged without being able to share their creations in real life. Even so, I was really impressed with what they managed to build. Here's my favourite, wishing one student's grandparents a happy birthday!

Taking it further…

The MINI•MU gloves make it easy to produce a sound as they have a built-in speaker. However, the micro:bit can only make very simple beeps by itself.

I spent a bit of time trying to get closer to the full MiMU glove experience by sending the movement of the glove to something else that could make some richer sounds.

Scratch

One option is to connect the micro:bit to Scratch using its Bluetooth connection. The Scratch site has very clear instructions on how to get the micro:bit connected – you need to install the Scratch Link software on macOS or Windows, or else use the Scratch app for ChromeOS or Android tablets; then install a special program on the micro:bit itself; and finally add the micro:bit extension to Scratch and identify your micro:bit by the name it displays.

There are a few tutorial programs to get you started on the Scratch site, and there's a range of examples on the micro:bit site as well – including a theremin to make spooky sci-fi noises.

I’ve made another example that lets you make noises as you move either the mouse or the micro:bit around. You can have a go below:



Bluetooth MIDI

Another way to get better sound from the MINI•MU glove is to connect the micro:bit to a tablet or computer using MIDI over Bluetooth. MIDI is a music communication system dating back to 1983 – almost as old as the micro:bit's BBC precursor, the venerable BBC Micro from 1981. You can use MIDI to describe starting and stopping a particular note, how hard the note is played (its velocity) and all sorts of other variations – and the system is supported by most music-making programs, including things like GarageBand.
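
Under the hood, MIDI messages are tiny – starting or stopping a note takes just three bytes. Here's a little Python illustration of how those messages are put together (the note and channel numbers are just examples):

    # A MIDI note-on or note-off message is three bytes: a status byte
    # (message type plus channel), a note number and a velocity.
    def note_on(note, velocity, channel=0):
        return bytes([0x90 | channel, note & 0x7F, velocity & 0x7F])

    def note_off(note, channel=0):
        return bytes([0x80 | channel, note & 0x7F, 0])

    # Middle C (note 60) played fairly hard (velocity 100) on channel 0:
    print(note_on(60, 100).hex())   # '903c64'
    print(note_off(60).hex())       # '803c00'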

Getting MIDI from your micro:bit to your computer or tablet is a little complex, but once it's working you can make all kinds of sounds! Here are some tips and tricks to get you going:

Build a project with the Bluetooth MIDI extension

To send Bluetooth MIDI from your micro:bit, you’ll need to add the Bluetooth MIDI extension:

  1. In MakeCode, click Advanced at the bottom of the blocks
  2. Scroll down to the bottom to see Extensions
  3. Click that and type “midi” into the search bar
  4. Make sure you click on the bluetooth-midi extension, not the plain midi one
  5. You’ll now have some extra light blue blocks in the Midi category

Here's a ready-made project you can start with – the Pentatonic MIDI Controller.

Tilt your micro:bit left and right to choose a note, then press the A button to send the current note. There's a secret feature that uses the light level picked up by the LEDs – can you figure out how to use it?
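
The project itself is built from MakeCode blocks, but if you'd like to see roughly what the note-picking logic boils down to, here's a MicroPython sketch of my own (MicroPython on the micro:bit can't send Bluetooth MIDI, so this version just shows the chosen note and beeps it locally):

    # My own MicroPython sketch of the tilt-to-note logic - the real project
    # uses MakeCode blocks plus the bluetooth-midi extension, which
    # MicroPython can't reproduce, so this version just beeps locally.
    from microbit import accelerometer, button_a, display, sleep
    import music

    FREQS = [262, 294, 330, 392, 440]   # C major pentatonic: C4 D4 E4 G4 A4

    while True:
        # Map left-right tilt (about -1024..1024) to an index 0..4
        index = (accelerometer.get_x() + 1024) * len(FREQS) // 2049
        index = max(0, min(len(FREQS) - 1, index))
        display.show(str(index))
        if button_a.is_pressed():
            # Where the MakeCode project sends a MIDI note, we just beep
            music.pitch(FREQS[index], duration=200)
        sleep(50)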

Make your micro:bit easier to connect

micro:bit MakeCode projects have a secret option that makes it easier to connect the micro:bit to something over Bluetooth. It would probably make things harder if you had lots of devices in a classroom, but if you just want to connect a single micro:bit to your computer or tablet then this will make your life much easier!

  1. Go to project settings by clicking on the gearwheel at the top right of the page
  2. Turn on “No Pairing Required: Anyone can connect via Bluetooth”
  3. Click Save
  4. Re-download your project to the micro:bit

The Pentatonic MIDI Controller project above already has this option set for you!

micro:bit to iPad

This is fairly simple – in GarageBand for the iPad, go to Settings (the gearwheel icon), tap Advanced, tap Bluetooth MIDI Devices, tap the micro:bit, then turn on the Connect switch. If the micro:bit is marked as offline, tap Edit and Forget the device.

Now you can use your micro:bit to play and record tunes in GarageBand – and choose any instrument you want!

micro:bit to Mac

macOS won't connect to the micro:bit by itself and needs a little bit of prompting. Download and run the Bluetility app to see the micro:bit, then click the last service and the last characteristic (they should start with “E95D93AF-” and “E95DB84C-” respectively). You should then see a Subscribe button in the Detail pane – clicking it should make the micro:bit appear in your Bluetooth system preferences.

Now that the micro:bit is connected over Bluetooth, you need to tell the macOS MIDI system to listen to it. Open the Audio MIDI Setup app (it lives in /Applications/Utilities – or just use Spotlight Search). Make sure the MIDI Studio window is showing (if not, go to the Window menu and choose Show MIDI Studio). Then click the Bluetooth symbol in the MIDI Studio window's toolbar, which should open the Bluetooth Configuration window and list your connected micro:bit. Click the Connect button next to your micro:bit and you're finally ready to play!

Open GarageBand, create a new project, and add a Software Instrument. GarageBand should pick up the connected micro:bit immediately and play the notes that you send. You can then choose different sounds from the Library to make your micro:bit play whatever instruments you want.

micro:bit to Windows 10

I haven’t tried this, but there’s a very clear video by BEATNVISION that suggests using the MIDIberry app to receive the Bluetooth MIDI from your micro:bit and play sound.

Use the micro:bit connectors

As well as the movement sensors, the micro:bit has an edge connector that you can use to attach all sorts of other technology. The MINI•MU glove uses its pins to connect the speaker, but if you're going to output the sound over Bluetooth, then you can use the pins to get more input instead.
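
For example, here's a tiny MicroPython sketch of my own that uses pin 1 as an extra touch input (pin 0 is left alone because it drives the glove's speaker):

    # My own tiny MicroPython sketch: use the edge connector for extra input.
    # Touch pin 1 (while also holding GND) to trigger a note - pin 0 is
    # left alone because it drives the glove's speaker.
    from microbit import pin1, display, Image, sleep
    import music

    while True:
        if pin1.is_touched():
            display.show(Image.MUSIC_CROTCHET)
            music.pitch(523, duration=100)   # a quick C5 beep
        else:
            display.clear()
        sleep(50)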

Here’s the micro:bit wizard David Whale with his micro:bit guitar:



And if you really want to see how far you can go, how about using muscle movements to control your music? Here’s a video on how to pick up electrical changes from your muscles using a micro:bit:



Keep making music!

Wednesday 22 June 2016

Bringing the real world into Minecraft

I’ve been running a Code Club at Fleetville Junior School in St Albans for the past four years and we have a whole load of fun programming and making things with computers.

About a year and a half ago, I introduced some Minecraft challenges to the club and soon discovered that it was the genie that can’t be put back into the bottle…

But what a genie! The students have been programming virtual turtles to build and dig their way through challenges, culminating in designing and building huge bridges across an endless ocean (see their videos). They can’t get enough of being in Minecraft — even if they’re not allowed to kill each other or blow things up!

For the follow-on project from the bridge building, I wanted to give the students the chance to create something without programming — so I asked them to build a model of their school in Minecraft. They loved the idea!

I wanted the students to build the model themselves, but they needed a guide to help them get things to scale. I figured that a flat map of the school buildings sitting in the Minecraft world would be a good place to start.

Google Maps has a good detailed view of the outside of the school, but this wouldn't help with the interior rooms. Luckily, the school had a PDF architectural plan that they were happy to contribute.

I took a screenshot of the Google Earth aerial imagery of the school and its grounds and then combined the image with the architectural plan in a drawing program.

The next step was to convert the image into a Minecraft "schematic" file, which can be imported into a world using the WorldEdit mod. There's a great little program called Spritecraft that does exactly this job. You'll need the (paid-for) Full version of Spritecraft to export as a schematic, but all the money goes to a children's charity.
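
Spritecraft does the clever part for you, but the core idea is simple: match each pixel to the block with the nearest colour. Here's a rough Python sketch of the idea using the Pillow library – the palette is a tiny made-up sample (and the filename just an example), not Spritecraft's real block list:

    # Sketch of the pixel-to-block idea behind tools like Spritecraft:
    # match each pixel to the closest colour in a palette. This palette is
    # a tiny made-up sample and the filename is just an example.
    from PIL import Image

    PALETTE = {                          # block name -> approximate RGB
        "white_wool": (234, 236, 237),
        "black_wool": (21, 21, 26),
        "red_wool": (161, 39, 35),
        "stone": (125, 125, 125),
    }

    def closest_block(rgb):
        # Squared distance in RGB space is good enough for a sketch
        return min(PALETTE, key=lambda name: sum(
            (a - b) ** 2 for a, b in zip(rgb, PALETTE[name])))

    img = Image.open("school_plan.png").convert("RGB")
    blocks = [[closest_block(img.getpixel((x, y))) for x in range(img.width)]
              for y in range(img.height)]
    print(blocks[0][:5])   # the first few blocks of the top row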

It took me a little while to tweak the image to give a good output in Spritecraft. First of all, I had to make the architectural plan a bit more chunky, since those fine lines just didn't make it into the block-based world of Minecraft. Filling in the walls and removing the door symbols helped a lot, as did setting the windows to a contrasting colour (don't forget that some students might be colour blind).

Secondly I had to choose a scale… I measured the outside of the school using Google Maps (right-click on a starting point and choose “Measure Distance” then left-click on the end point) and compared this to the number of pixels across the school in the image. Although Minecraft is all set up to use one block to one metre, using this default scale made the school corridors too narrow at just one block wide!

For my build, a scale of 1.5 blocks to 1 metre seemed to work better — the corridors were a couple of blocks wide and the building didn’t seem too large. This might be different for your build — so play around and see what works for you. Here's the result:

Actual image used to export schematic of school
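
If you'd like to see those scaling sums written out, here's a worked example in Python (the measurements are made up – substitute your own):

    # Worked example of the scaling sums - the measurements are made up,
    # so substitute the ones for your own building.
    school_width_m = 80        # measured with Google Maps
    image_width_px = 400       # width of the school in the plan image
    blocks_per_metre = 1.5     # our chosen scale

    pixels_per_metre = image_width_px / school_width_m    # 5.0
    resize_factor = blocks_per_metre / pixels_per_metre   # 0.3
    new_width_px = round(image_width_px * resize_factor)  # 120

    print("Resize the image by", resize_factor,
          "so 1 pixel = 1 block:", new_width_px, "blocks across")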

Finally, I exported the schematic file from Spritecraft and copied it into the correct server directory (for MinecraftEdu that’s server/schematics but it may be different for your server). Then in a flat Minecraft world, I used the WorldEdit //schematic load <filename> and //paste commands to make the plan appear on the ground.

For our school there was an extra complication — the whole school is built on a slope, with steps and ramps along the corridors. To ensure the floor level ended up in the right place, I asked the students to start building at the lowest point of the school and then create the corridors (with their slopes and steps) before creating the classrooms.

This has been a really successful project so far. The students have made some really detailed rooms, complete with furniture, automated lighting systems(!) and non-player characters acting as teachers.

Here are some pictures of their build so far. I hope to make another video of the students giving a guided tour before the end of term…

Fleetville School in Minecraft

If this has helped you and you’ve created something from the real world in Minecraft (especially another school), please comment below — I’d love to hear from you!

Thursday 17 March 2016

Mobile@Scale 2016

Last month Facebook invited mobile developers into their London offices for a collaborative discussion on scaling mobile development.

The focus was mainly on native development — and the attendees were mostly iOS and Android developers — but the scope expanded to include scaling development processes as well as how to scale apps for lots of users.

Jim Purbrick @JimPurbrick, Engineering Manager in Facebook London’s office, introduced the talks by saying that on mobile, the bug stakes are higher — once a bug is released, the app is on people’s phones and is much harder to fix. You can’t just update the code and see people get the fix in the next page refresh.

And for all the focus on ending up with a native app in the platform-specific app store, two of the big themes were sharing code across platforms and being able to make quick changes to apps that were already deployed.

I was impressed by the inclusivity of the conference — not only were the speakers from a variety of companies (not just Facebook or Facebook partners) but the audience were encouraged and given time to ask questions and discuss with the speakers.

The talks and discussions were all videoed and I’ve linked to them below together with my notes highlighting the points that made an impression on me.

Scaling iOS @ Google

Michele Aiello @micheleaiello, Tech Lead Manager on the Calendar app, Google

Michele gave a really detailed talk about how the iOS teams at Google deal with handling large amounts of code shared amongst many geographically spread developers. There are lots of useful nuggets in here – and it's interesting to see where Google have invested time and effort in order to make cross-platform and large-team development easier.

[Video: Scaling iOS @ Google – Michele Aiello, posted by At Scale]
  • Google has more than 60 apps in the Apple App Store!
  • iOS devs have moved from London to every office worldwide
  • strategy on how to share code really important
  • design & ease of use crucial for scaling apps
  • yearly gathering of all mobile developers
    • often start shared efforts there
  • regular tech talks in local offices + recorded & streamed around the world
  • have feature specific “Tiger teams”
    • one goal: ship a feature
    • cross functional: Android, iOS, web, PM, UX, API, etc
  • have trouble with merging & reviewing xib, project & storyboard files
    • so Chrome team developed GYP: JSON for structure & dependencies
    • GYP: “generate your project”
    • use storyboard & xibs for prototyping, then shared code for elements
  • release management:
    • regular releases every 2-6 weeks
    • with feature flags to toggle new features
    • compile time flag during initial dev, then runtime flags for later
    • known as the release train
    • heavier-weight trains need synchronising with marketing etc — ok to be a little late
    • 75%-80% of users are using auto-update so automatically get updates
  • testing:
    • XCTests for functional and performance
    • KIF & EarlGrey for UI tests
  • sharing code
    • single repo
    • HEAD is always stable
    • all code is available and shared
    • making a change in shared code: see the test results and roll back if issues
  • for any shared code
    • enforce documentation & example code
    • catalog app for UI elements
  • for cross platform sharing
    • try to share model cross platform and to server too
    • makes offline easier
    • have tried webviews & javascript
    • now using C++ & J2ObjC
      • C++ easy on iOS, complicated on Android
      • used in Chrome
      • J2ObjC used for Inbox
    • J2ObjC even lets you debug into transpiled Java code in Xcode
      • breakpoints, stepping, variable values all work
    • if code is simple, look at sharing the tests rather than the code
  • swift at Google: currently playing with it
    • have found that development is faster
    • probably waiting a few months to bring into production apps
  • user testing using beta releases (testflight, android beta)
    • metrics in the app
    • surveys after testing
  • have tools to search whole repo to find out if code is still used
  • sharing layout
    • done by sharing the layout logic

When mobile IDEs need to scale

Al Sutton @AlSutton, Facebook

Al talked about how Facebook builds Android apps, and how they feed back improvements to their build process into the open source community (e.g. IntelliJ community edition and the Buck build tool). By using Buck, they allow their developers to choose whatever IDE they want.

Nuclide

James Pearce @JamesPearce, Head of Open Source at Facebook

James continued from Al’s Android introduction to talk about Facebook’s new Nuclide IDE for building iOS apps… It’s exciting to see some competition in the iOS IDE world — whilst Xcode is great at some things, it often leaves a lot to be desired. JetBrains’ AppCode is a useful challenger but to have an extensible open-source IDE for iOS could be a game changer. The only downside for me is that Nuclide relies on Buck, so you have to change your project to buy in to the Facebook toolchain. Perhaps if someone could create a Fastlane plug-in…?

  • unlike IntelliJ, Xcode is not open source, so can’t contribute
  • existential issue for Facebook…
  • started extending Atom from github
  • added Flow, Babel, Clang & Buck
  • created Nuclide
  • also added Chromium dev tools to help debug into app
    • lets you debug into Javascript, Objective C, etc all in same place
    • transpiling keeps source maps to help with line numbers
    • also lets you inspect into UI hierarchy for ReactNative apps
    • includes highlighting
  • now have 2/3rds of committing engineers using Nuclide
  • have analytics built-in
    • tracking feature usage
  • internal infrastructure team has become a product team
  • now available at http://nuclide.io
    • analytics kept for internal
  • other open source projects
    • pop: iOS animation library
  • doesn’t have refactoring yet

6 lessons learned scaling mobile at SoundCloud

Next up were a couple of sessions from smaller companies (though still not small!) showing how they built and adapted their apps faster to keep up with demand. SoundCloud spoke about using React Native (more on that later) and how they structured their dev teams to include mobile developers.

Jamie McDonald (Android) @jdamcd & Matej Balatic (iOS) @skavt, SoundCloud

  • building out new SoundCloud Pulse app for people creating sounds
  • most engineers busy on main SoundCloud listener app
  • got a partner for Android, but built the iOS app with React Native using web developers
  • shared design & feature set across platforms saves a lot of time
    • were previously designing features twice
    • marketing was more complex too
  • mobile specific API
    • mobile-specific features: background sync, batch fetches
    • “back-end for front-end” idea from ThoughtWorks
  • developed C-based mobile playback library (skippy)
    • initially for Android, now rolled out across iOS too
    • e.g. optimise streaming for emerging markets
  • tried to spread mobile devs through feature teams
    • but spread too thinly
    • weren’t able to pair and share knowledge
  • instead created clusters of feature teams
    • mobile engineers shared amongst each cluster
    • could be in enough numbers together
  • release train model
    • each feature team can take responsibility for shipping
    • allowed action but also feedback and responsibility
    • use feature flags — team responsible for turning on when ready
  • tools used:
    • iOS:
      • FlipTheSwitch
      • stable CoreData stack — specific use of framework
    • Android, use LightCycle (soon to be open-sourced)
      • forward life cycle events to small independent modules
      • receives callbacks but doesn't need to know which activity it's attached to
      • enables better unit tests as can separate things out more effectively

Backend-driven native UIs

John Sundell @johnsundell, iOS Developer, Spotify

Spotify have an almost completely content-based app and are constantly tweaking to change the presentation and priority of different music. John and his team came up with a way of handling that change by controlling the whole app UI from the backend API.

  • define components in backend API
  • generalised data binding
  • generalised components
    • implement standardised components which can be picked up from API
  • can put caching and lots of standard stuff in the generalised app
  • control the UI from the backend
    • API contains view models rather than raw models
  • Ed.: makes sense if you have an app with lots of similar components
    • especially for a content-based app
    • similar to Google’s code-based component library
  • were able to delete 20,000 lines of code on home page browse view
  • overall have been able to delete 100K lines of code across iOS & Android
  • use layout traits to control layout
    • e.g. full width, separator, stackable
  • request sends a lot of data about the device to the backend
    • can return different components & layout depending on device or screen size etc
    • sometimes send extra data in response so can handle quick changes e.g. screen rotation
  • support infinite scroll using metadata with URIs for follow-up pages
  • can set up fallback components: if a new component isn't available, fall back to a previous one
    • enables playing around with new features & UI but still supporting older builds
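
To make the idea concrete, here's my own minimal Python sketch of the pattern (nothing to do with Spotify's actual code): the backend returns view models naming standard components, and the client just looks each one up in a registry:

    # My own minimal sketch of backend-driven UI (not Spotify's code):
    # the API returns view models and the client maps each one to a renderer.
    backend_response = [
        {"component": "header", "title": "Made for you"},
        {"component": "card", "title": "Daily Mix 1", "subtitle": "Tap to play"},
        {"component": "shiny_new_thing", "title": "Beta feature"},
    ]

    def render_header(model):
        print("==", model["title"], "==")

    def render_card(model):
        print("[" + model["title"], "-", model.get("subtitle", "") + "]")

    RENDERERS = {"header": render_header, "card": render_card}
    FALLBACK = "card"   # unknown components degrade to something sensible

    for model in backend_response:
        render = RENDERERS.get(model["component"], RENDERERS[FALLBACK])
        render(model)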

Infer: Moving fast with static analysis

Dulma Churchill, Software Engineer, Facebook

Taking up Jim Purbrick’s challenge of dealing with the higher stakes of bugs in mobile, Dulma gave us an introduction to Infer — Facebook’s static analyzer that can check for memory and resource leaks and null pointer issues each time you compile.

  • static analyzer that doesn’t require pre/post conditions
  • compositional, so doesn’t need to process whole project at once
  • very intertwined with compiler
  • infer can find inter-procedural bugs not local to single file
  • used with CI can be set up to only process newly compiled files
  • within facebook: fix rate around 70% in recent months
    • high rate due to getting results on continuous integration
  • there’s an Xcode plugin
  • integrated with codeboard
    • web-based IDE to teach programming in classroom
    • Java, Python, Haskell…
  • see their blog post about being used at Spotify

3000 images per second

Henna Kermani @tokyotwilight, Software Developer, Twitter

Some interesting stats from Twitter here, in Henna’s story of how Twitter scaled up their image and video handling.

[Video: 3000 images per second – Henna Kermani, posted by At Scale]
  • image uploading used to be all in the same API call as the tweet itself
    • any point of failure would fail whole thing
    • waste of bandwidth for client & server
  • split out image upload from tweet content
    • also allowed segmented, resumable uploads
    • used multi-part POST requests with separate INIT, APPEND & FINALIZE API calls (see the sketch after these notes)
    • massive drop in upload failures, especially in the developing world
  • did research on age of images:
    • 15 days 50th percentile
    • 150 days 90th percentile
  • so kept original + 20 days of variants
    • balance between storage increase per day and computation on each request
    • saved $6m in 2015 just from this change!
  • image formats
    • tried using WebP for 6 months last year in Android app
      • ~25% smaller than PNG or JPG — better engagement
      • but not supported on Android <4 or iOS…
    • converging on progressive JPEG instead
    • used Facebook’s Fresco library in Android app
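
Here's a rough Python sketch of that INIT / APPEND / FINALIZE pattern – the endpoint and field names are illustrative, not Twitter's actual API:

    # Rough sketch of a segmented, resumable upload with separate
    # INIT / APPEND / FINALIZE calls. The endpoint and field names are
    # illustrative, not Twitter's actual API.
    import requests

    BASE = "https://example.com/media/upload"
    CHUNK = 512 * 1024   # upload in 512 KB segments

    def upload(path):
        data = open(path, "rb").read()
        media_id = requests.post(BASE, data={
            "command": "INIT", "total_bytes": len(data)}).json()["media_id"]
        for i in range(0, len(data), CHUNK):
            # Each segment is its own request, so a network failure only
            # costs one chunk - resume by re-sending that segment
            requests.post(BASE, data={
                "command": "APPEND", "media_id": media_id,
                "segment_index": i // CHUNK},
                files={"media": data[i:i + CHUNK]})
        requests.post(BASE, data={"command": "FINALIZE", "media_id": media_id})
        return media_id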

React Native

Pieter De Baets @javache, Facebook

Pieter gave a detailed intro to React Native — building native apps for iOS and Android using just JavaScript and HTML-like markup.

  • if you ship a bug in a mobile app, there will always be a user out there running that bug — no matter how many updates you apply…
  • write UI declaratively, code in Javascript
  • share lots of code between iOS and Android
  • Apple’s guidelines don’t allow you to update code in a running app
    • but there’s an exception that lets you update JavaScript over the air
    • so you can update React Native apps instantly

Don’t forget the web

Jeremy Keith @adactio, Founder, Clearleft

After all that talk of native development, Jeremy brought us back to thinking about the web and how it will always be the largest, widest target. It isn’t a “platform” and it will never be the leading edge of mobile, but it is for everybody.

[Video: Don't forget the web – Jeremy Keith, posted by At Scale]
  • when building for the web
    • start with core functionality
    • implement with the simplest technology
    • enhance!
  • can be done for whole service but also for individual components
  • Ed.: is this that much different from native?
    • especially for different OS levels, Android features etc
  • there’s always something new that’s not fully supported