SomethingUnreal
[MIDI] Mii-tan no Mahou de Pon!! (highly incomplete WIP)
Well, I actually started making a lot more than 12, but most of them got abandoned, and these are all the ones that managed to survive my short attention span and make it all the way through to completion. If you just want to hear them, I've already uploaded videos of all of them.
I mention this in the video and on the download page, but these won't sound so good on non-Sound Canvas synths. For a few MIDIs, I've made special versions designed to work better on simpler synths (simplifying the percussion and moving it all onto one channel, emulating slower attacks with "expression" control changes, etc.), and these are included in the downloads where they exist. However, making them is very time-consuming and it's not always even possible to make them sound half-good, so I didn't do it for all of my MIDIs. They should sound fine on the newer SC-8850 too, and I'd be curious as to how they sound on an older Sound Canvas like the SC-55.
BaWaMI, my MIDI software synth, can sound acceptable playing most of them - at least if you spend a minute or two adjusting volumes and instruments. You can also use it to see what the MIDI is doing, message by message. Unfortunately, it would take a lot of work in a MIDI editor to get them to play well on a Yamaha XG synth, though.
You can grab the MIDIs from here:
http://somethingunreal.homeip.net/88pmidi
web.archive.org/web/20230606064954/http://robbi-985.homeip.net/88pmidi
You can also download BaWaMI from here, if you'd like:
http://somethingunreal.homeip.net/blog/?page_id=84#bawami
web.archive.org/web/20230606064951/http://robbi-985.homeip.net/blog/?page_id=84#bawami
web.archive.org/web/20230430225434/http://robbi-985.homeip.net/hosted_programs/update/bawami
I just hope that the people who wanted me to release these are still around to be able to have fun with them now...
This is a demo of a MIDI synth I'm developing for the Arduino. Its sound is currently very basic - it has no concept of different instruments, can only produce square waves and noise, and each MIDI channel can only be at one of 3 different volume levels. It has no fixed sample rate, and is always producing a new sample as quickly as possible, which is slower when more notes play at once (in practice, the sample rate ranges from about 20 kHz down to about 6 kHz).
It supports pitch-bends, modulation, monophonic/polyphonic MIDI channel mode, and some percussive notes. It also recognises some sysex messages, including GM/GS/XG "reset" messages and GS/XG messages to set a MIDI channel's percussion mode.
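As a rough illustration of the mixing idea described above (a Python sketch, NOT the actual Arduino code), summing a few square-wave voices at one of 3 volume levels into unsigned 8-bit samples might look like this:

```python
# Hypothetical sketch of the scheme described above: each voice is a square
# wave at one of 3 volume levels, summed into unsigned 8-bit samples for a
# DAC.  Frequencies, scaling and the fixed sample rate are all illustrative.

def square(phase):
    """Return +1 for the first half of the cycle, -1 for the second."""
    return 1 if (phase % 1.0) < 0.5 else -1

def render(voices, sample_rate=8000, length=0.01):
    """voices: list of (frequency_hz, volume_level) with volume_level in 1..3."""
    samples = []
    for i in range(int(sample_rate * length)):
        t = i / sample_rate
        mixed = sum(square(f * t) * (level / 3.0) for f, level in voices)
        # Scale and offset into the 0..255 range of an 8-bit DAC, clipping hard.
        value = int(128 + 40 * mixed)
        samples.append(max(0, min(255, value)))
    return samples

samples = render([(440.0, 3), (660.0, 1)])
```

On the real Arduino there is no fixed sample rate - the loop just computes the next sample as fast as it can - which is why the effective rate drops as more voices play.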
--- TO USE THE CODE YOURSELF (hardware info): ---
If you want the Arduino to accept MIDI data from "real" MIDI hardware (through a MIDI socket), you'll need to build a circuit with an optocoupler, connect that to the Arduino's serial RX port, and change "#define UseRealMIDIPort False" to "#define UseRealMIDIPort True" (this affects the baud rate used). Due to laziness, while testing, I used a program called "Hairless MIDI-Serial Bridge" and the virtual MIDI cable driver "MIDI Yoke" to send MIDI data straight over the Arduino's USB serial connection, instead of building the proper circuit.
The code controls one "port" on the Arduino (a group of 8 pins determined by the specific Arduino board model), which connects to an 8-bit DAC (a simple R-2R resistor ladder) to give an 8-bit audio output. I'm using port C on the Arduino Mega, because that neatly corresponds to digital pins 37 (LSB) to 30 (MSB), but it should work on other Arduino boards with minimal changes to the code, as long as there is a port where all 8 bits are mapped to digital pins. The output port (PORTAudio and DDRAudio) would need changing to one consisting of 8 usable pins, and the maximum number of playing notes at once (NumSoundChans) could either be reduced (will save CPU time and memory) or, in the case of the Arduino Due, increased.
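For anyone unfamiliar with R-2R ladders, the maths of an ideal one is simple - each bit contributes a binary-weighted fraction of the supply voltage. A quick sketch (illustrative supply voltage; real resistor tolerances make it less exact):

```python
# Ideal behaviour of an 8-bit R-2R ladder DAC like the one described above:
# the output voltage is simply Vref * code / 2^bits.  The 5V reference here
# is an assumption for illustration.

def r2r_output(code, vref=5.0, bits=8):
    """Output voltage of an ideal R-2R ladder for a given digital code."""
    assert 0 <= code < (1 << bits)
    return vref * code / (1 << bits)

# Mid-scale (code 128) sits at exactly half the supply:
print(r2r_output(128))  # -> 2.5
```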
You can see useful links and download my code for the current version from my blog:
http://robbi-985.homeip.net/blog/?p=1948
Alternative download link:
https://files.catbox.moe/a2aszy.7z
The MIDI is being played on MIDITester:
openmidiproject.osdn.jp/MIDITester_en.html
MIDI file being played:
http://robbi-985.homeip.net/hosted_programs/update/arduino/ss/Cutie_panther.mid
I did not make the MIDI, and I don't know who did. Please, people, at least credit yourself in the metadata ;_;
EDIT: Lmao, thanks for thinking that "#define" is a hashtag, YouTube. You'll never be Twitter, so please stop trying.
Her brain is now a Raspberry Pi instead of an Arduino, and she sees with an infrared camera (for better low-light performance) in greyscale, instead of just measuring the distance in front of her. This means she can now have a proper goal - instead of just moving towards walls and then turning, she can now drive along a path!
It uses a neural network to judge how quickly it should be driving and how to steer. Although she only sees at 128x64 resolution, this is a huge improvement! Currently, I'm still in the process of training her well (driving along paths with her recording the view and the controls that I'm giving her).
================================
EDIT: Since so many people have asked about the voice, I'll answer here instead of repeating it in the comments:
It's Acapela Rosie, a voice for TextAloud for Windows. I played around with a bunch of sentences and the pronunciation editor to get the right intonation, and recorded around 200 short clips of the voice from there. Dojikko (running Linux) stitches those sounds together to make sentences.
================================
In a future video, I will also go into details of the circuitry, including the way that the Raspberry Pi can hold its own power on and only turn it off once it's finished shutting down, because the only explanations for how to do this that I could find online required a ridiculous number of components and constantly leaked small amounts of power when turned off, which this way does not. Plus, this way only requires a relay, transistor and resistor.
- - - Please forgive the inverted colours of the subtitles! - - -
I only noticed this after I had subtitled the entire video, and there's no easy way to batch-change this in the video editor. I tried using a hex editor to find/replace the colours, but to no avail... orz
I _could_ pretend that it's a throw-back to the time when I used the colours this way, but it was actually a mistake.
This is 3 different recurrent neural networks (LSTM type) trying to find patterns in raw audio and reproduce them as well as they can. The networks are quite small considering the complexity of the data. I recorded 3 different vocal sessions as training data for the network, trying to get more impressive results out of the network each time. The audio is 8-bit and a low sample rate because sound files get very big very quickly, making the training of the network take a very long time. Well over 300 hours of training in total went into the experiments with my voice that led to this video.
The graphs are created from log files made during training, and show the progress that the network was making immediately before each piece of audio that you hear in the video. Their scrolling speeds up at points where I only show a short sample of the sound, because I wanted to dedicate more time to the more impressive parts. I included a lot of information in the video itself where it's relevant (and at the end), especially details about each of the 3 neural networks at the beginning of each of the 3 sections, so please be sure to check that if you'd like more details.
I'm less happy with the results this time around than in my last RNN+voice video (youtube.com/watch?v=FsVSZpoUdSU), because I've experimented much less with my own voice than I have with higher-pitched voices from various games and haven't found the ideal combination of settings yet. That's because I don't really want to hear the sound of my own voice, but so many people commented on my old video that they wanted to hear a neural network trained on a male English voice, so here we are now! Also, learning from a low-pitched voice is not as easy as with a high-pitched voice, for reasons explained in the first part of the video (basically, the most fundamental patterns are longer with a low-pitched voice).
The neural network software is the open-source "torch-rnn" (github.com/jcjohnson/torch-rnn), although that is only designed to learn from plain text. Frankly, I'm still amazed at what a good job it does of learning from raw audio, with many overlapping patterns over longer timeframes than text. I made a program(*) that substitutes raw bytes in any file (e.g. audio) for valid UTF-8 text characters, and torch-rnn happily learned from it. My program also substituted torch-rnn's generated text back into raw bytes to get audio again. I do not understand the mathematics and low-level algorithms that make a neural network work, and I cannot program my own, so please check the code and .md files at torch-rnn's Github page for details. Also, torch-rnn is actually a more efficient fork of an earlier software called char-rnn (github.com/karpathy/char-rnn), whose project page also has a lot of useful information.
I will probably soon release the program that I wrote to create the line graphs from CSV files. It can make images up to 16383 pixels wide/tall with customisable colours, from CSV files with hundreds of thousands of lines, in a few seconds. All free software I could find failed hideously at this (e.g. OpenOffice Calc took over a minute to refresh the screen with only a fraction of that many lines, during which time it stopped responding; the lines overlapped in an ugly way that meant you couldn't even see the average value; and "exporting" graphs is limited to pressing Print Screen, so you're limited to the width of your screen... really?).
(*)Here is the code rewritten from VB6 in a C++-like pseudocode:
http://robbi-985.homeip.net/information/bintoutf8_pseudo.txt
Also, here is an English explanation of the idea behind how it works:
http://robbi-985.homeip.net/information/bintoutf8_info.txt
EDIT: I have released my BinToUTF8 program to the public! Please have a look here:
http://robbi-985.homeip.net/blog/?p=1845
CrowdSound is a site where people were given a chord progression and song structure, and were then allowed to vote note-by-note to make a melody. It's an experiment to see if lots of people can work together to gradually make an entire song by voting on many tiny additions. Since people are making remixes already, I decided I'd try, too.
As of the 15th of August 2016, only the melody is complete, so I imported the MIDI of the melody (from crowdsound.net ) into Sekaiju (the MIDI editor I use). From there, based on the chord progression, I made tracks for bass, percussion, overdriven and acoustic guitar parts, 2-part pad and a portamento synth sequence to liven things up a bit. Then I decided on how I'd switch between the various backing parts so they weren't all fighting for the spotlight at the same time. After that, I changed the velocities of all the melody notes (since I'm using a velocity-sensitive lead instrument on Bawami), to make it sound less annoying and repetitive and to complement the beat. I also shortened some long notes (which is within CrowdSound's rules for arranging) to let the lead stop for breath every now and then, added modulation (vibrato) sparingly, and decided to sometimes pitch-bend from one note to another during the conclusion instead of instantly jumping (I think this should be allowed, because a real human voice would have to do this all the time =P).
In keeping with the openness of CrowdSound, you can download my MIDI (designed to be played on Bawami rev.132 or later) here:
http://robbi-985.homeip.net/F_s_t_v2_copy/Own/crowdsound-985.mid
It uses several GS "variation" instruments, so it will sound worse on GM synths. It also uses an instrument (12-string guitar) which is not present in Bawami rev.131, the currently-released version, but it should still sound fine on that version (it'll fall back to the "Acoustic Guitar (Steel)" instrument). That, along with many other changes, will be in the next version I release!
This MIDI is playing on BaWaMI, which is a freeware, retro-sounding MIDI synth that uses subtractive synthesis. I've been working on it every now and then since 2010.
You can find out more (and grab the latest version) here:
http://robbi-985.homeip.net/blog/?page_id=84#bawami
(Click its title to get to the download page)
The 3D scrolling view of notes is MIDITrail:
en.osdn.jp/projects/miditrail
EVERYTHING IN A MUCH EASIER-TO-READ LAYOUT (aka my blog):
http://robbi-985.homeip.net/blog/?p=1804
You might want to watch part 1 if you haven't already, so that this makes more sense: youtube.com/watch?v=CKYR8au2nfE
I had planned to screen-capture my program while recording but completely forgot to at the time, so please try to survive my camcorder pointing at my laptop screen...
Here, the PID controller is trying to keep the motor at a precise speed (and get it there as quickly as possible). It doesn't work well half the time because the L298 (H-bridge), responsible for switching power to the motor, doesn't seem to like making the motor brake. That means it speeds up much more quickly than it slows down, which the algorithm doesn't like (it's designed for linear systems) - it basically ends up trying too hard to slow down, resulting in a big undershoot. I might be able to somewhat compensate for that in code.
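For anyone curious what "the algorithm" actually looks like, the core of a PID update is only a few lines. This is a generic Python sketch with made-up gains, not the code I'm running - but the clamped output at the end is exactly where the asymmetry problem comes from: a negative correction can only reduce the drive, leaving the motor to coast down by friction rather than being braked.

```python
# Generic PID controller sketch (not the actual code from the video).  The
# output is clamped to a 0..255 PWM duty range, so "slow down" corrections
# can do no better than cutting the drive to zero - the plant then decays
# at its own rate, which is the non-linearity described above.

class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured, dt):
        error = setpoint - measured
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        out = self.kp * error + self.ki * self.integral + self.kd * derivative
        return max(0.0, min(255.0, out))  # clamp to the PWM range
```

Against a simple simulated motor (speed decaying towards drive level), this converges on the setpoint; against a motor that speeds up faster than it slows down, the same gains overshoot badly, as in the video.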
I might try this with a Sabertooth motor speed controller (as used in my old singing motors project) in place of the L298, which can certainly force a motor to stop spinning, but the Sabertooth gives such a boost to the motor to get it up to speed that 90% of the PID's job becomes redundant... Oh well, at least it'd be able to hit any given note without me having to calibrate it first like I did with the singing motors. By the way, that's why this system measures speed in Hz - I originally intended for it to play music like a new kind of "singing motor".
Originally, I planned to use a 3-pin computer fan instead of this motor, using the tachometer pin to measure the speed, but that required me to have a common ground for the motor and the tachometer, and I didn't have the right components available (I only had N-channel MOSFETs, but I needed a P-channel MOSFET). So I ended up throwing my own motor assembly together and using an N-channel MOSFET only (could only turn power on/off, not brake), which the PID system didn't like. I thought the L298 would fix that problem, since it'd allow the PID system to reverse power to the motor and brake it, but it turns out it's too weak to have much of an effect after all... =/
Part 2/2 will show it running at full speed (with a more powerful PSU), show a much more naïve speed controller algorithm for the lulz, and just clear up a couple of details.
This is a recurrent neural network (LSTM type) with 3 layers of 680 neurons each, trying to find patterns in audio and reproduce them as well as it can. It's not a particularly big network considering the complexity and size of the data, mostly due to computing constraints, which makes me even more impressed with what it managed to do.
The audio that the network was learning from is voice actress Kanematsu Yuka voicing Hinata from Pure Pure. I used 11025 Hz, 8-bit audio because sound files get big quickly, at least compared to text files - 10 minutes already runs to 6.29MB, while that much plain text would take weeks or months for a human to read.
UPDATE: By popular demand, I have uploaded a video where I did this with male English voice, too: youtube.com/watch?v=NG-LATBZNBs
I was using the program "torch-rnn" (github.com/jcjohnson/torch-rnn), which is actually designed to learn from and generate plain text. I wrote a program that converts any data into UTF-8 text and vice-versa, and to my excitement, torch-rnn happily processed that text as if there was nothing unusual. I did this because I don't know where to begin coding my own neural network program, but this workaround has some annoying constraints. E.g. torch-rnn doesn't like to output more than about 300KB of data, hence all generated sounds being only ~27 seconds long.
It took roughly 29 hours to train the network to ~35 epochs (74,000 iterations) and over 12 hours to generate the samples (output audio). These times are quite approximate as the same server was both training and sampling (from past network "checkpoints") at the same time, which slowed it down. Huge thanks go to Melan for letting me use his server for this fun project! Let's try a bigger network next time, if you can stand waiting an hour for 27 seconds of potentially-useless audio. xD
I feel that my target audience couldn't possibly get any smaller than it is right now...
EDIT: I have put some graphs of the training and validation losses on my blog for those who have asked what the losses were!
http://robbi-985.homeip.net/blog/?p=1760#settings
EDIT 2: I have been asked several times about my binary-to-UTF-8 program. The program basically substitutes any raw byte value for a valid UTF-8 encoding of a character. So after conversion, there'll be a maximum of 256 unique UTF-8 characters. I threw the program together in VB6, so it will only run on Windows. However, I rewrote all the important code in a C++-like pseudocode:
http://robbi-985.homeip.net/information/bintoutf8_pseudo.txt
Also, here is an English explanation of how my binary-to-UTF-8 program works:
http://robbi-985.homeip.net/information/bintoutf8_info.txt
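To make the substitution idea concrete, here's the same scheme sketched in Python (the real program is VB6; the exact character range below is my own choice for illustration, not necessarily the one BinToUTF8 uses):

```python
# Sketch of the BinToUTF8 idea: map each of the 256 possible byte values to
# one fixed Unicode character, so any binary file becomes valid UTF-8 text
# that a character-level model like torch-rnn can train on, and map the
# generated text back to bytes afterwards.  The U+0100..U+01FF range here
# is an assumption for illustration.

ALPHABET = [chr(0x100 + i) for i in range(256)]   # 256 distinct characters
REVERSE = {c: i for i, c in enumerate(ALPHABET)}  # character -> byte value

def bytes_to_text(data: bytes) -> str:
    return "".join(ALPHABET[b] for b in data)

def text_to_bytes(text: str) -> bytes:
    return bytes(REVERSE[c] for c in text)

audio = bytes(range(256))
assert text_to_bytes(bytes_to_text(audio)) == audio  # lossless round trip
```

Because every byte maps to exactly one character, the converted file contains at most 256 unique UTF-8 characters, which keeps the model's "vocabulary" the same size as the raw byte alphabet.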
EDIT 3: I have released my BinToUTF8 program to the public! Please have a look here:
http://robbi-985.homeip.net/blog/?p=1845
This time, I didn't throw away or dismantle everything in the video! I did thoroughly rearrange whatever remained afterwards, though.
For a start, it's actually in a rack-mount case (2U), with ~17TB total disk capacity and 20GB of RAM (it usually has about 24, but he had to remove some to use in another machine, hence the sticks of RAM lying on top of the case). It's running a few VMs for people (with Arch Linux as the host), acting as a NAS, and doing a few other things like running some IRC bots, but he shut it down and rebooted it so that I could hear the fans rev up. =D
The music I used is "Nature's Gasp" by Atmozfears & Devin Wild. Big thanks to Atmozfears for letting me use it here (now let's hope that YouTube's automatic song recognition doesn't punish me despite that...).
This test just naturally emerged after I played around with splitting tracks in a hardstyle mix for seamlessly playing on a CD. The trick to ensuring no silence between tracks was to split on CDDA frame boundaries (every 2352 bytes, which makes 75 per second for audio CDs). It took me some time before I realised that Audacity can measure position in CDDA frames and that I didn't have to convert the number of samples into CDDA frames myself every time...
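For reference, the frame arithmetic works out like this (audio CDs are 44100 Hz, 16-bit stereo, so one 2352-byte frame holds 588 sample pairs):

```python
# The arithmetic behind splitting on CDDA frame boundaries, as described
# above: 2352 bytes per frame, 4 bytes per stereo 16-bit sample pair,
# giving exactly 75 frames per second at 44100 Hz.

BYTES_PER_FRAME = 2352
SAMPLES_PER_FRAME = BYTES_PER_FRAME // 4   # 2 bytes x 2 channels per sample
FRAMES_PER_SECOND = 44100 // SAMPLES_PER_FRAME

def round_down_to_frame(sample_position):
    """Snap a split point (in samples) down to the nearest CDDA frame boundary."""
    return (sample_position // SAMPLES_PER_FRAME) * SAMPLES_PER_FRAME

print(SAMPLES_PER_FRAME, FRAMES_PER_SECOND)  # -> 588 75
```

Splitting only at multiples of 588 samples is what guarantees the CD burner never has to pad a track with silence.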
I don't have a grudge against Foobar or anything - it really did get stuck in a loop of spinning up and down the CD the last time I tried it. Also, this may be my most anticlimactic and rushed ending ever.
This is a program I'm casually working on every now and then to print images on any 24-pin ESC/P2 dot matrix printer (ESC/P2 is Epson's control language for their dot matrix printers). It directly controls the printer by sending raw commands to it; you just need to tell Windows that it's a "Generic / Text Only" printer rather than using the official Epson driver, and Windows will pass the commands straight on to the printer without trying to translate them.
This is a standalone program for printing image files, not a driver for printing from any program. I've not yet released it, but I intend to some time. Compared to the driver, it currently allows:
- Printing in (lower) resolutions for high speed (down to 60 DPI).
- Detailed control over colour dithering/thresholding.
- Very tall print-outs not restricted to a paper length (e.g. for continuous paper).
- Printing only individual component colour(s) of an image.
-* Faster colour printing by doing large blocks of each colour at once.
-* Multi-strike printing (optionally offsetting each one to fill in the gaps between the earlier ones' dots).
-* "Quiet" (multi-pass) printing (unfortunately, I can't control the actual speed).
*The last three are somewhat "hacks", abusing commands to try to force unofficial behaviour, and as such, they rarely work properly in combination with each other. In particular, the last two often don't work when printing colour.
By the way, printing in blocks of colour is no longer done by relying on sending commands with the correct timing (as it did in the previous video), which means it's now much more reliable and doesn't get messed-up by pausing the printer, image content, etc.
Previous video: youtube.com/watch?v=4EGSB-g2IfQ
The printer in this video is an Epson LQ-300+II.
Thankfully, the infrared light from my camcorder is apparently very clean (not pulsing), so I can use that to see things in the dark without affecting the sound.
The transformer is just designed to convert 230V AC mains down to 12V, so its audio properties are not very good (it muffles things a lot). Ideally, I'd be using an audio transformer that's designed to sound good, but this is all I had available. I am using it to block the DC that the solar panel produces, because I don't fancy putting 17.5V into my Quad-Capture (sound interface)'s mic input. I originally tried to make a high-pass filter to remove the DC, using a capacitor and resistor, but it only worked until the capacitor became fully-charged, at which point the sound faded. It was much clearer-sounding than the transformer, but there was also a huge amount of background noise.
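For anyone wanting to try the RC approach, the corner frequency of a first-order high-pass is f = 1/(2πRC). The component values below are just an example, not the ones I used:

```python
import math

# Corner frequency of a first-order RC high-pass filter: f_c = 1/(2*pi*R*C).
# Example values only (10 kilohms, 1 microfarad) - not the components from
# the video.  Frequencies below f_c are attenuated, which is how the DC
# offset from the solar panel gets blocked.

def highpass_cutoff(r_ohms, c_farads):
    return 1.0 / (2.0 * math.pi * r_ohms * c_farads)

fc = highpass_cutoff(10e3, 1e-6)  # about 15.9 Hz
```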
I want to revisit this idea in the future, especially to take it for a drive at night, listening to the street lights and car lights (since modern cars use PWM to dim the tail lights).
This MIDI took me about 5 days to make, plus a few hours of tweaking at the end. It sounded like it was nearly finished after 2 days, but the hardest stuff was still left to do at that point (I do hate struggling to transcribe barely-audible parts, but they really fill in the gaps and make it sound complete).
Firsts for me include the fast arpeggio effect (surprisingly, the 88P never complained about this), wah effect on the quiet guitar (on the right), and gratuitous use of "All Sound Off" whenever possible, to try to keep things running quickly enough. Also, 4 sound effects I've never used before!
3 channels get re-used for different instruments (16 channels have never been so insufficient), but more annoyingly, the synth's update speed drops really low during the chorus because of all the playing voices, making pitch-bends sound jumpy, and it took a lot of tweaking and quite some luck to get a clean recording. I kind of wonder if it's just my 88P which slows down so much when many notes are playing (even if there are not many MIDI messages), or if it's simply a limitation of its CPU speed. It might just be a coincidence, but it seemed to handle it better immediately after power-up, so perhaps it becomes worse as it gets hotter. In that case, maybe I could attach a heat sink to the CPU, or just put a fan in there (I don't really want to drill holes, though). The case doesn't really get very hot, though. I kind of wish I could limit it to playing only 32 voices at once, instead of letting it struggle with 64. Lowering release times only gets you so far.
You can download all of my SC-88Pro MIDIs here:
http://somethingunreal.homeip.net/88pmidi
Somehow, the fact that an anime of Moetan had been made eluded me for 8 years and I only recently discovered it. "YOU MAGGOTS ARE HUFFING AND PUFFING--" oh wait, wrong English-teaching mahou shoujo.
This is a remake of the music from the DOS version of the game "Dizzy: Prince of the Yolkfolk". It was released on many different platforms, and I'm sure the instruments vary a lot on the different versions, but I've only ever played the DOS version. It used AdLib music (technically, the Yamaha OPL2 FM synth).
I've seen some remixes of this, but I tried to stay faithful to the original (including overlapping notes, monophonic channels and notes that get cut because of the limitations of the AdLib hardware). Originally, I was going to use a synth trumpet sound to really match the original, but I couldn't find a suitable FM-style harpsichord to go along with it, and synth trumpets sounded stupidly out-of-place when paired with the realistic harpsichord, so I had to go for modern-sounding brass instruments.
Yamaha music on a Roland synth... heresy!
You can download all of my SC-88Pro MIDIs here:
http://somethingunreal.homeip.net/88pmidi
P.S. You can hear the original here:
youtube.com/watch?v=puMQKaTc51k
Headphones are recommended for the noise reduction demo.
The waveforms displayed while Dolby is selected are simulations - versions of the original audio processed by a compressor/expander. In reality, Dolby mostly only affects the treble (as you can hear), since that's where most of the tape hiss is, so you wouldn't get big low-frequency waves, but hey, it's easier to visualise this way.
My usual video editor did not feel like working, so I made this entire thing in Avisynth - it's like coding, but for video. Please kill me now. Well, I used VirtualDub to crop out all the dead air, my failures to speak, and misinformation that I said by mistake. On the plus side, I learned a lot of stuff by editing this way (such as how badly Avisynth is designed regarding modifying audio). Also, I didn't have to screen-capture anything to get those waveforms - they're being made in realtime by the Waveform plugin, which is processing audio from different Dolby simulations which are also being made in realtime by SoxFilter's "compand" function.
This video didn't turn out as well as I would've liked - from the poor view of the mechanism to the incomplete demonstration of the functions and different tape types, and the fact that the only time I could record this (free of disturbances and noise) was a 45-minute slot when I was half-asleep. So, if people are interested, I might re-visit this.
With all the old technologies on my channel such as a dot matrix printer, FM MIDI synth, SC-88Pro and now a cassette recorder, perhaps I should rename my channel to "SomethingRetro". There's an old VCR here just waiting to be opened, too...
The Arduino fires all 3 ultrasound sensors and listens for their echoes at the same time in order to "see" at the highest frame rate possible (typically 20-50 FPS), but this causes issues with echoes from one sensor bouncing around and returning to a different sensor. Although the code avoids any clearly-bad echoes like this (e.g. 2 echoes on the same sensor), it's far from perfect, and she often thinks that she's crashed into something (an object is very close to the head) when she hasn't. I think there's also a strange bug in the function that times the echo delays, or something strange is going on with hardware interrupts, because the function sometimes returns 0 for a sensor which clearly has an object in range, and at the same time, the sound of the tone playing on the speaker (using the built-in tone() function) becomes distorted so that it doesn't even sound like a square wave anymore. I've never experienced that before, and I have no idea what's wrong there. Oh well, she looks cool aligning herself half of the time.
Fun game to play: See how many inconsistencies there are in this video. It's a combination of videos recorded 7 months apart.
It should be better when I at least have a stable and level tray for the paper (or whatever) to sit on. I have an idea for an alternative to a heavy sheet of steel, which you should be able to see in the next video. Perhaps the PSU fan will have arrived by then, too...
Also, enjoy the in-sync 50 FPS if you can! That pen flicks back and forth at stupid speeds, so a high frame rate is actually useful here.
It's pretty much ready to print - just a little hot glue and a sheet of paper and it's all set! But unfortunately, this video is already ~8 minutes long, so that'll have to wait.
After trying out solenoids and ruling out floppy drive motors because of speed, I looked at servos as a way of moving the pen up and down. I settled on Hitec's second-fastest micro servo, which is digital (meaning it's not limited to 50 Hz update intervals) and has metal gears (meaning it won't destroy itself quickly). It's designed for use in R/C helicopters, so I'm hoping it will handle fast motions over a small range of travel, with a light load, well. I'm certainly impressed by it so far.
The top-left is the 88P's display, the text below it is every MIDI message (displayed by Bawami, my own MIDI soft-synth), and the background is a view of most of the notes (top part - unfortunately, I couldn't capture the view of all 128 MIDI notes) and all control changes (bottom part). I'm trying something a little different this time - a smoothly-scrolling piano roll made possible by Sekaiju (the MIDI editor program)'s ability to print the pianoroll view. I used a virtual printer driver (PDFCreator) to print the pianoroll to a series of images, batch-crop (FSViewer) and stitch them together (IrfanView), and then simply pan it at the right speed on the video editor.
This is my new biggest MIDI yet, both in duration and file size (138 KB) thanks to those thousands of expression changes to get the reversed piano effect (simply changing the piano's attack time did not sound good).
I am working on a dedicated page on my web site for hosting all MIDIs I've made for the SC-88 Pro synth and any modified, "general-purpose" versions which I made to make it sound at least acceptable on other synths. Please be aware that it is often very tedious (and sometimes, downright impossible) for me to make simplified versions which sound good, so I don't intend to make them often. The page will also show the various features of the synth used by each MIDI, for any curious people like myself who find that stuff interesting. It'll also act as a warning for how bad the MIDI will sound if played on a different synth, since some of my MIDIs are basically built around the 88P.
EDIT: It's done! You can download all of my SC-88Pro MIDIs here:
http://somethingunreal.homeip.net/88pmidi
I have a lot of MIDIs started but very little motivation to finish them recently. Among them are "Verge" (Shimamiya Eiko), "Borderland" (Kawada Mami), "Planeptune's Theme ver. Re;Birth" (Neptunia), "I'm Not Okay" (My Chemical Romance), "You Are Alive" (Fragma), fragments of various hardstyle songs, and the first one I ever started making for the 88P in 2012: "AirFort-JP Hardcore mix" (Minamotoya). I really have to finish at least some of these some time.
If you like this song, please check GOP's other stuff:
soundcloud.com/ghostsofparaguay
This is what happens when I don't plan everything through before I start making something (i.e. all the time). Sorry. Stuff will actually happen in the next one, I promise!
The printer in this video is an Epson LQ-300+II.
My program sends raw ESC/P2 data (Epson's printer control codes) to any printer port that you have installed (including USB-to-printer adaptors), with no need for the Epson driver. It bypasses the page length limit enforced by the driver, provides detailed dithering options (including error diffusion, used in this video), and takes a different approach to printing colours. This approach is designed to be much faster than colour printing using the Epson driver, but my program has to fight against the printer's urge to merge everything internally and print all 4 colours slowwwwwly on every line. It all seems to depend on timing (waiting a moment so that it starts printing), which I'm very disappointed by, because different printers print at different speeds. This means it'll be hard to make a program that works well with any ESC/P2-compatible printer. It will at least end up with the correct ink on the paper - it just might take 10 minutes to print. Oh well~
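To give an idea of what "raw ESC/P2 data" means, here's a minimal Python sketch. ESC @ (initialise) and ESC r (select printing colour) are genuine ESC/P commands, but everything else a real driver needs (raster graphics commands, line spacing, the Windows RAW-printing calls) is omitted here, and my program obviously does far more than this:

```python
# Building a tiny raw ESC/P job as a byte string.  ESC @ and ESC r n are
# real ESC/P commands; sending the resulting bytes to a "Generic / Text
# Only" queue (or straight to the port) is left out of this sketch.

ESC = b"\x1b"

def initialise() -> bytes:
    return ESC + b"@"  # ESC @: reset the printer to its defaults

def select_colour(n: int) -> bytes:
    # ESC r n selects the printing colour on colour-capable models
    # (0 = black, 1 = magenta, 2 = cyan, 4 = yellow).
    return ESC + b"r" + bytes([n])

job = initialise() + select_colour(2) + b"Hello\r\n"
```

Because the "Generic / Text Only" driver does no translation, whatever bytes the program builds are exactly what arrives at the printer - which is the whole point.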
EDIT: Newer progress: youtube.com/watch?v=6gyzlScdiTQ
The headphone output is connected directly to the coil in the clock movement and driven with a square wave. The coil is 250 ohms - some headphones have a much lower impedance than this, so there's no need to worry about damaging the laptop's headphone output, either! The gears, on the other hand, may wear out a little quicker than usual. =P
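To put rough numbers on that: a headphone output delivering around 1 V into 250 ohms only sources about 4 mA, less than typical 16-32 ohm headphones draw. The drive signal itself is trivial to generate - here's an illustrative Python sketch of rendering a square wave as 16-bit samples (not the actual setup used in the video):

```python
# Render a square wave as signed 16-bit samples, ready to be played out of
# the headphone jack. Illustrative sketch; parameter values are arbitrary.

def square_wave(freq_hz, sample_rate=44100, seconds=1.0, amplitude=32767):
    """Return a list of samples alternating between +amplitude and -amplitude."""
    n = int(sample_rate * seconds)
    half = sample_rate / (2 * freq_hz)   # samples per half-cycle
    return [amplitude if int(i / half) % 2 == 0 else -amplitude
            for i in range(n)]
```

Each polarity reversal is what steps the clock's little motor, so the wave's frequency directly sets how fast the hands spin.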
There's still no pen, so I stuck an LED where the pen will go and made it turn on when the pen should be drawing, as a test. Then, I took a long-exposure photo while it "printed" with the LED, pointing the camera upwards slightly after it finished each row. What I ended up with was a photo of the image that it tried to print, with inverted colours and stretched a bit because I didn't move the camera at the right speed.
It can also now print bidirectionally, and it's much faster to receive the data for the next row of pixels, because they're no longer sent one at a time.
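Bidirectional printing just means alternate rows are drawn in opposite directions, so the head never makes an empty return trip. The row ordering can be sketched like this (illustrative Python, not the actual firmware):

```python
# "Serpentine" print order: even rows left-to-right, odd rows right-to-left,
# so the head is always doing useful work. Sketch of the idea only.

def serpentine(rows):
    """Given a list of pixel rows, return (row_in_print_order, reversed?) pairs."""
    return [(row if y % 2 == 0 else row[::-1], y % 2 == 1)
            for y, row in enumerate(rows)]
```

The reversed flag matters because the firmware still needs to know which physical direction the head is travelling when it lays down each pixel.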
Illustrations are by とんぐ (Tongu) (Pixiv member ID: 258901) and CAFFEIN (blog: http://caffein89.blogspot.co.uk / Pixiv member ID: 13054).
Although it still doesn't look like a printer, I've been working on the software. Here, you can see the first stages of having the printer be controlled by the computer. The image data is actually being sent, but very slowly (think of it as a "compatibility mode") - I wanted to make sure I had two-way communication working perfectly before making things faster. As such, there are still debugging messages being displayed on the Arduino's LCD, too, left over from me trying to get things to work.
The next stage will be to prove that the Arduino is really receiving the image data correctly, even though there's no mechanism to move a pen yet!
Illustration is by CAFFEIN (blog: http://caffein89.blogspot.co.uk / Pixiv member ID: 13054).
P.S. This video editor is bloody awful.
Sorry for being really lazy about uploading this. Also, the upload itself finished a little quicker than I thought it would, so yay, the date at the end is in the future.
The original song has some heavy dynamic range compression, so this MIDI sounds somewhat lacking if you compare them, but I couldn't actually make out any more parts than this in the original. The 88Pro has a compressor effect available, but it can only use one effect at a time, and I was already using distortion. The high-pitched sound effects on the right speaker should be faster, but the synth can't reliably update the pitch much faster than this. The toms needed an unexpectedly tricky selection of percussive sounds and panning, so I ended up using 4 separate percussion channels...
You can download all of my SC-88Pro MIDIs here:
http://somethingunreal.homeip.net/88pmidi
Thanks to code samples by Karl Peterson, I now know how to send raw data to any printer, so I have no excuse not to make/modify/release some printer-related programs for the public. Oh no!
This script language I made also allows if/then decisions and setting values of variables, but I haven't fully finished coding support for those yet. Unfortunately, I have no imagination at all regarding thinking up storylines, so this is all I could do for a video.
The printer is an Epson LQ-300+II, running on Windows 7 using the "Generic / Text Only" driver. This driver is the key to being able to send raw text and commands to the printer.
The program can also monitor plain-text log files and print new lines of text as they are saved. Here, it prints off live chat logs from 4 IRC channels. I optimised it a little for IRC logs, so that it can print different parts in different colours. I considered making it monitor my web server's log file, but visiting certain pages on my site can add dozens of lines at once to the log file, which would waste a whole sheet of paper in seconds.
My program uses a USB interface, which I made with an Arduino, to communicate with the printer. The Arduino passes printer status info to the laptop, such as "error" or "paper out", and forwards data from the laptop to the printer's parallel port if the printer is ready.
I'm using a different, brand-new printer this time because it turns out that the other, second-hand printer had a bad head with 2 or 3 dead pins, causing blank lines in the print-out. I actually recorded several videos on the progress of cleaning the printer and its head, and was able to fix one of the pins, but 1 or 2 never came back to life, so it'd make a bit of an anticlimactic video that I might not upload. By the way, I recorded the pins firing in slow-motion, and one was very slow while the other didn't move at all. I uploaded the video here: youtube.com/watch?v=zk3iJJ9PpDY
PREVIOUS PART - HARDWARE:
youtube.com/watch?v=PfhKc0gSNIw
This interface (Arduino) goes between the printer and the laptop, appearing as a serial port to a program of mine which will be running on the laptop. In this video, for testing that I can communicate with the printer, the Arduino itself is sending the data, instead of my laptop. In the next video, the Arduino will be playing a simpler role and mainly just forwarding data from my laptop to the printer.
NEXT PART - SOFTWARE:
youtube.com/watch?v=oG4cm2Ay0GQ
This is a MIDI I made for the Roland SC-88Pro of the opening theme to Kiss×sis OAD, originally sung by Taketatsu Ayana and Tatsumi Yuiko, written/arranged by Takahashi Nana. Despite being pretty complex and having a part that was incredibly hard to decipher beneath all the other instruments (I could only clearly make out half of one bar, and had to estimate what the rest was), this one only took me 5-6 days.
I decided to screenshot my MIDI player BaWaMI, chop it in half and re-arrange it into one long row of 16 MIDI channels at the bottom. I think this makes better use of the 16:9 video frame. Fun fact: early versions of BaWaMI had the channels arranged like that, when they were narrower and didn't have those blue bars.
You can download all of my SC-88Pro MIDIs here:
http://somethingunreal.homeip.net/88pmidi
I wonder if this synth is older than Ako or Riko.
Blog post with details and download link:
http://somethingunreal.homeip.net/blog/?p=1352
The Microsoft VB6 Runtime installer is included in the download. You only need to use it if my program fails to start. Windows 7 and later operating systems come with it pre-installed.
The MIDI editor I use is the brilliant freeware Sekaiju (version 3.8, in this video):
http://openmidiproject.sourceforge.jp/Sekaiju_en.html
Never call a song simple until you've tried to make a MIDI of it. And trying to remake the inaccurate timings of the toms took a silly amount of effort. That said, this MIDI only took 2 days to make, and a third of that was spent on the animations. I ended up making a program to draw in, which spits out the SysEx messages to control the synth's LCD, because I didn't fancy wearing out half of the buttons on the synth by drawing pictures on it (yes, the synth has a drawing mode - I guess Roland had some spare ROM to play with). Manually putting the commands to buffer and display every frame at the right time (there are 48 different frames) into the MIDI file was tedious.
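For the curious, these LCD messages are ordinary Roland "DT1" SysEx packets, which carry a checksum over the address and data bytes. Here's an illustrative Python sketch of building one; the 0x10 0x00 0x00 address shown (display text) and the model ID 0x45 are from memory of the Sound Canvas documentation, so treat the specifics as assumptions rather than gospel (the dot-frame data my program generates lives at a nearby address):

```python
# Build a Sound Canvas "show text on the LCD" SysEx by hand.
# Addresses/model ID are illustrative assumptions - check the manual.

def roland_checksum(body):
    """Roland checksum: address + data + checksum must sum to 0 mod 128."""
    return (128 - sum(body) % 128) % 128

def display_text_sysex(text, device_id=0x10):
    """SysEx showing up to 32 ASCII characters on the synth's display."""
    body = [0x10, 0x00, 0x00] + [ord(c) for c in text[:32]]
    return bytes([0xF0, 0x41, device_id, 0x45, 0x12]   # DT1 write command
                 + body + [roland_checksum(body), 0xF7])
```

Timestamping a stream of messages like these in the MIDI file (one "buffer" plus one "display" per frame) is exactly the tedious part mentioned above.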
I have released the program which generates SysEx messages to control the Sound Canvas's LCD! If you're interested, please see:
youtube.com/watch?v=yZLGj-4fWO4
You can download all of my SC-88Pro MIDIs here:
http://somethingunreal.homeip.net/88pmidi
I got to try something I've never done before - a "compressed" effect on the percussion to make it sound more powerful, by turning down the cymbals when the kick and other bassy percussion plays. Because all those curves spam up the view of control changes at the bottom of the screen, I hid those ones at times in this video.
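Those curves are just streams of control changes. As a rough illustration (with hypothetical parameter values, not the ones in this MIDI), the ducking could be generated like this: at each kick, drop expression (CC#11) on the cymbal channel and ramp it back up.

```python
# Fake sidechain ducking with MIDI control changes: at every kick, dip the
# cymbal channel's expression (CC#11) and recover linearly. Parameter
# values here are made up for illustration.

def ducking_events(kick_ticks, depth=40, recover_ticks=120, steps=6, full=127):
    """Return (tick, cc_value) pairs for CC#11 on the ducked channel."""
    events = []
    for t in kick_ticks:
        for i in range(steps + 1):
            frac = i / steps                        # 0 at the hit, 1 when recovered
            value = int(full - depth * (1 - frac))  # dip, then ramp back to full
            events.append((t + int(recover_ticks * frac), value))
    return events
```

A curved (e.g. exponential) recovery instead of a linear one would sound closer to a real compressor's release, at the cost of more control changes.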
My brother's music channel is:
youtube.com/user/ArcadiaMetalUK
He was kind enough to give me access to the individual layers in his track, so I didn't have to strain to hear notes hidden underneath other notes, which is what I normally struggle with when making MIDIs. He even gave me the original drum MIDI track, but that meant I had to set up my own user drum kit for the SC-88Pro that was compatible with the drums software he uses. The fact that the synth only has 2 discrete overdrive channels means that it sounds a little awkward during the transitions to/from the guitar solo. Plus, solos are always hard to transcribe anyway.
All in all, it was a fun challenge! I ended up only using 8 MIDI channels (3 for percussion), which is the least ever in a MIDI I've made for this synth. I usually end up using at least 15 out of 16. As for the video itself, there are several fails because my old laptop died and I've clearly not got stuff set up correctly on my new one yet.
You can download all of my SC-88Pro MIDIs here:
http://somethingunreal.homeip.net/88pmidi
(Thanks very much for the project files, bro! I wouldn't have managed this without them.)
Here's how it sounds when played correctly:
youtube.com/watch?v=s7pTnsJ67iw
I decided to record how my "JUMPING!!" MIDI sounds when played in this state, too:
youtube.com/watch?v=As4HaqptpCA
The synth is the Roland SC-88Pro, and the interface is a Roland Quad-Capture. I'm having to use an old driver (1.0.1) because the latest one is incompatible with a bad USB3 driver for my laptop. The bug might be fixed in later versions. The MIDI sequencer software is the wonderful Sekaiju:
http://openmidiproject.sourceforge.jp/Sekaiju_en.html
A later version can be seen here:
http://www.youtube.com/watch?v=l6IlxuYVK9o
She's progressed a little further still, now - I should upload a video of her outside soon!
Music by Kevin MacLeod (incompetech.com):
"The Way Out"
"Dance Monster"
I started this project almost a year ago, but I'm only uploading the videos now that I'm sure it actually gets somewhere and that I didn't just abandon it like I do with most of my projects.
The rest of the videos probably won't feature so much editing work - I just needed it in this video because my explanations would have been too hard to follow if I'd kept only my speech and video of my hand moving enthusiastically in front of the camera.
I should've made a compilation of every time I said "Basically" at the end, basically.
I'd love to see this with a proper high-speed camera.
Full version is here: youtube.com/watch?v=kptqzXlMnjs
This was surprisingly easy to make, despite having a few things that I've not used much before (delay echoes, changing of resonance over time, reversed kick). It took 6 days in total, but the majority of the work was done in the first 4 days.
You can download all of my SC-88Pro MIDIs here:
http://somethingunreal.homeip.net/88pmidi
These Superex Pro-B VI studio headphones from the 1970s belong to my grandpa, and sound amazing, but only when they work. Which, unfortunately, was not very often. I looked around online, and it's apparently fairly common for the capacitors to die with age. So I replaced each (0.0025 uF) with the closest value I could get hold of (0.0022 uF).
Let-down noticed after a few hours: it didn't fix the problem. But at least that's two components that should last longer now. And if it's not the capacitors, then it may well be the custom-wound transformers in each ear, which are obviously not made any more. Oh well - it gave me an excuse to make a casual camcorder/chatty video for the first time in a while.
BaWaMI, my MIDI player, is playing the MIDI just so that you can see what's going on, and it's sending the MIDI messages to the SC-88Pro synth, whose display is shown in the top-right.
This was roughly as hard to make as "Mii-tan no Mahou de Pon!!" (perhaps there's something about songs that have two exclamation marks in the title). I put a lot of effort into making the guitars as accurate as I could, so here's a version with only the GUITARS, BASS and DRUMS playing:
youtube.com/watch?v=p-KIekYxr-c
I started working on this 4 months ago, then lost motivation, and then continued and finished it over the last week. Also, holy crap, I'm going to take YouTube's matching of third-party content at 0:17 as a compliment on how accurate the trumpet part is.
You can download all of my SC-88Pro MIDIs here:
http://somethingunreal.homeip.net/88pmidi
BaWaMI is available for download here:
http://somethingunreal.homeip.net/blog?page_id=84#bawami
I decided to do away with the "loli voice" effect that I used in the incomplete version (youtube.com/watch?v=wsQjKQw_nPs), because the effect broke whenever there was a strong pitch-bend.
The 3D visualisation is MIDITrail, scrolling text is BaWaMI's MIDI message view (my MIDI player's interpretation of them), and in the top-right is the Sound Canvas synth's LCD itself.
Production took about 4 days, including time when I couldn't do anything because I was at work or dead from this cold that I have. This is, without a doubt, the new most-complicated MIDI I've ever made, with many SysEx messages throughout changing the routing to the Insert FX and its parameters, along with tons of control changes, some instrument changes (16 channels just isn't enough), a user-defined drumkit, 3 percussion channels, and a healthy dose of pitch-bends (with varying sensitivity). My "Suwa Foughten Field" MIDI remake had some of these, but I think this sounds better, and certainly posed more challenges.
Blog post with some links:
http://somethingunreal.homeip.net/blog/?p=1045
Version at Niconico:
http://www.nicovideo.jp/watch/sm22915969
You can download all of my SC-88Pro MIDIs here:
http://somethingunreal.homeip.net/88pmidi
This is my brother's cat. She's about 2 years old.
This was my first time using both the Roland SC-88Pro and Korg Radias synths together. The Radias is playing most percussion and a high-pitched lead, and also processing a guitar sound from the SC, while the SC plays everything else (bass, pads, electric piano, lead synth, lead guitar, cymbals, etc). The 2 synths seem to work well together.
What genre could this be? I wasn't aiming for anything, so I'm not sure. Hard dance, perhaps?
MP3 (320kbps): http://www.mediafire.com/?vfw5s2sgud2dw9b
Niconico for scrolling comments: http://www.nicovideo.jp/watch/sm21264063
It's been 3 months since my last video. Wow.
I recorded the sound at a very high sample rate (192 kHz - at this rate, the recording can capture frequencies up to 96 kHz). Afterwards, I slowed down the sound to a quarter of the original speed, 1/8th, 1/16th, etc, even down to 1/128th (at that speed, just 2 seconds would be over 4 minutes long). I also recorded video in slow motion, but my camcorder can only go up to 200 FPS (1/8th speed when played back at the usual 25 FPS), meaning that lower speeds such as 1/64th aren't smooth, as I had to further slow down the video in editing. But the main point of this video is the sound.
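The arithmetic behind those numbers is simple: slowing a recording by a factor of N makes it N times longer and divides every frequency in it by N, which is what brings the ultrasonic content down into audible range. A quick sketch:

```python
# Slowing by a factor N stretches time by N and divides frequencies by N.

def slowed(duration_s, freq_hz, factor):
    """Return (new duration in seconds, new frequency in Hz) after slowdown."""
    return duration_s * factor, freq_hz / factor

# e.g. at 1/128 speed, 2 seconds lasts 256 s (4 min 16 s), and an ultrasonic
# 96 kHz component comes down to an audible 750 Hz.
```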
This pushes the MIDI bandwidth quite far in places, and as a result, the note timing is slightly off there (mainly when many notes need to start at the same time).
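For anyone wondering why the bandwidth matters: MIDI runs at 31250 baud with 10 bits on the wire per byte, so every byte takes 320 microseconds, and a big simultaneous chord inevitably gets smeared in time. An illustrative calculation (running status is the standard trick of omitting repeated status bytes):

```python
# How much a "simultaneous" chord smears over the MIDI wire.
# 31250 baud, 10 bits per byte (start + 8 data + stop) = 320 us per byte.

def chord_spread_ms(notes, running_status=True):
    """Milliseconds between the first and last Note On of one chord."""
    us_per_byte = 10 / 31250 * 1e6                        # 320 us
    bytes_needed = 1 + 2 * notes if running_status else 3 * notes
    return bytes_needed * us_per_byte / 1000
```

So a 10-note chord takes roughly 6.7 ms to get through even with running status, which is already enough to hear on sharp attacks.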
Video at Nicovideo for those who prefer to leave scrolling comments:
http://www.nicovideo.jp/watch/sm20339204
You can download all of my SC-88Pro MIDIs here:
http://somethingunreal.homeip.net/88pmidi
I used the freeware MIDI sequencer "Sekaiju" (3.2) by Kuzu:
http://openmidiproject.sourceforge.jp/Sekaiju_en.html
I also used GS Advanced Editor for setting up the Sound Canvas synth's various parameters, such as custom reverb and delay, envelope settings, percussion settings, etc.
This video is both to show off some new things in my software and to let you hear how it performs with this particular MIDI file. This is the biggest update since I first released BaWaMI to the public, and I even changed some smaller things, such as the character of the reverb (there's now damping, so that hihats and treble such as the shakers in this MIDI don't sound painfully loud anymore).
Download and full list of changes I made in this version (over 30!):
http://somethingunreal.homeip.net/blog/?p=847
BaWaMI's usual page:
http://somethingunreal.homeip.net/blog/?page_id=84#bawami
MIDI used in this demo:
http://somethingunreal.homeip.net/F_s_t_v2_copy/Clannad%20OST/kmtcla11_88p.mid