This blog is my way of documenting the process I went through to build a gateway that gives my Apple IIGS access to an area of my fileserver (an old 2008 Mac Pro running OS X El Capitan) using AppleTalk.
I used to have all this working on an Ubuntu file server, but I recently seized the opportunity to swap it for the silent-running Mac Pro that had been collecting dust in my office.
Sadly, some time in the distant past, Apple chose to remove support for AppleTalk from macOS, so you can’t just hook up your Apple IIGS to the network (via the wonderful Uthernet II) and ask GS/OS to connect (using AFPBridge these days) to the Mac like we used to back in the day.
When I was using the Ubuntu box, I’d used a package called A2Server to install a version of Netatalk that allowed the fileserver to pretend to be an old Mac, serving AFP over TCP. This worked very well.
There are other flavours of A2Server; however, they tend to rely on sharing the contents of a filesystem (or part thereof) on the local machine. I tried several without much success.
What I wanted was to share the contents of a folder on the Mac Pro to the IIGS. Since I couldn’t do this directly, with some suggestions from the community, I decided to use a Raspberry Pi as a gateway. The Apple IIGS would connect to the Raspberry Pi, and that in turn would connect to the Mac Pro. For the Apple IIGS, it would look like it was seeing files on the Mac Pro.
Connecting the Apple IIGS to the Raspberry Pi
I couldn’t get A2Server to work in this way. A2Server won’t install on the latest version of Raspbian because the version of Netatalk it brings in no longer compiles there. During my fiddling I did get A2Server working, but only by finding an oldstable version of Raspbian (which right now is Jessie, but that will change) and installing that on the SD card first.
Another option was to use a Docker image on the Raspberry Pi, kindly provided by fluxo-gs. That worked insofar as the IIGS could see files on the local machine (the Raspberry Pi), but they were read-only. That wasn’t going to be good enough.
What I really wanted was a current version of Raspbian running Netatalk 2.2.6 (you can’t use Netatalk 3.x because the bright sparks who bring us Netatalk decided that the Apple II community, among others, no longer needed to be included 🤷♂️) that would let the Apple IIGS see the files it’s supposed to.
So, after a bunch of attempts with various versions of Raspbian, and toolchains that only partly worked, I settled on finding a version of Netatalk 2.2.6 that would compile on the latest Raspbian. I found this via the Netatalk forums, where the compilation issues I’d seen with A2Server had been addressed. I grabbed a ZIP of that updated code and called it netatalk-code.zip.
Connecting the Raspberry Pi to the Mac Pro
The other half of this setup is making the files on the Mac Pro visible to the Raspberry Pi, mounted within the volume that Netatalk shares with the Apple IIGS.
To do the mount from the Mac Pro, I had two options:
Use SAMBA, the Windows file sharing protocol.
Use AFP, the Apple file sharing protocol.
Obviously, my preference was to use AFP, so I spent a lot of time hunting down a version of afpfs-ng that compiles cleanly on the latest Raspbian without dependency problems and the like. afpfs-ng is an AFP client. It also integrates with FUSE, so you can set up a mount point with it via the mount_afp command. Don’t try to add a mount point using it in /etc/fstab, as that will just put your Raspberry Pi into emergency mode.
Getting afpfs-ng to compile on the older Raspbian versions was way too hard, which was part of why I fixated on using the latest Raspbian. That said, it does depend on a bunch of things like FUSE.
Although I got AFP working via afpfs-ng, sadly there are some issues with how it integrates with FUSE that prevent me from mounting the AFP volume at a mount point inside the Netatalk volume. FUSE issues a warning and says to use the “nonempty” option; however, the mount_afp tool that afpfs-ng provides does not support that option. You can’t just delete all the files within that mount point either, because Netatalk creates things in there dynamically as part of its AppleTalk implementation.
So I had to settle on using SAMBA. 😔
SAMBA works; it just meant that I had to share the folder from the Mac Pro using SAMBA as well as AFP, which is, to my mind, messy. I might have been able to use NFS instead, but that would have meant setting up stuff on the Mac Pro that was only achievable via bash scripts (as far as I know), and I was trying to keep the Mac Pro “clean”.
Conclusion
Below are the steps I took to build up my Raspbian image. It now boots, starts Netatalk, and after booting kicks off a @reboot cron job to establish the SAMBA mount point inside the Netatalk volume. The steps marked with an 🍎 are, I believe, not needed unless you want to get AFP working via afpfs-ng.
My Apple IIGS is happy; it can see, read, and write files on my Mac Pro, and life goes on.
Steps
Install Raspbian Lite (in my case, Bullseye)
Update config.txt on the SD card to enforce HDMI support. You can use this file if you want; just copy it onto the SD card.
Booting the Raspberry Pi should prompt you to:
Set the keyboard layout.
Set your username to pi.
Set your password to apple2.
Run sudo raspi-config, and:
If you’re using WiFi instead of a wired network, configure the WiFi.
Change the name of the server to whatever you want. I used AppleIIBridge.
where the first argument is the location of your mount point (see step 10), the second is the name of the share for the Apple IIGS to refer to, the third specifies which user is allowed access, and the fourth and fifth are options relevant to this type of shared volume.
Then comment out the line containing the single “~” by prefixing it with a hash:
#~
I also updated the line beginning with :DEFAULT to:
Finally, if you want to automate the mounting of the SAMBA share, you can use this script, and place it within the file /home/pi/Documents/connectToMacPro.sh.
Remember to make it executable. Note that you need to replace <IPADDRESS> with the actual address of your Mac Pro (or equivalent).
Note also that the uid and gid are for the “pi” user. I got those by looking inside /etc/passwd and /etc/group.
#!/bin/bash
# Mount the A2Files share from the Mac Pro inside the Netatalk volume,
# retrying every 5 seconds until the mount succeeds.

MOUNT_POINT=/mnt/A2Host/A2Files
MACPRO=<IPADDRESS>

function connect() {
    echo "Trying to mount ${MOUNT_POINT}"
    sudo mount -t smb3 //${MACPRO}/A2Files "${MOUNT_POINT}" -o user=pi,pass=apple2,uid=1000,gid=1000
}

numfiles=$(sudo ls -1 "${MOUNT_POINT}" | wc -l)
while [ "${numfiles}" -eq 0 ]; do
    connect
    numfiles=$(sudo ls -1 "${MOUNT_POINT}" | wc -l)
    if [ "${numfiles}" -eq 0 ]; then
        echo "Unable to establish A2Files connection on Mac Pro, trying again in 5 seconds"
        sleep 5
    fi
done
echo "A2Files mounted successfully."
To get this script to execute after the system is up and running, add the following line to the sudo crontab using the command sudo crontab -e:
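It’s a single @reboot entry pointing at the script (adjust the path if you saved it somewhere else), something along the lines of:

@reboot /home/pi/Documents/connectToMacPro.sh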
A few years ago (gosh, is it 5?), I played a game called Journey on my kids’ PlayStation 4. It was like no game I’d ever played before. Here I was, wandering around the sands and ruins of a civilisation lost to time with nothing but my ability to call, to search and discover, and to fly.
This game, Journey, by thatgamecompany, really changed my idea of playing games, and what I wanted from the experience of playing a game.
I found the joy of the mystery, wanting to know more, to grow within the game, and to reach a goal: that mysterious mountain in the distance.
And once I’d completed it, I of course wanted more. I wanted to play more games like this. So I searched in vain; there just wasn’t anything I could find at the time that resembled the experience. The closest I came was a work in progress called Omno by Studio InkyFox, but at that time, it was years off.
And then, on the 13th of September 2017, at the iPhone 8 launch event, Apple and thatgamecompany announced the new game Sky (which later became Sky: Children of the Light), which was to arrive on iPhone, iPad and, notably, the Apple TV 4K.
I was hooked. I began watching for evidence of a launch date, and then early in 2018 I managed to get a place as a beta tester. Once I had that place and was able to play (admittedly on my iPhone, and not a nice big Apple TV screen), I was both dazzled and really happy.
Sky was, and is, an amazing achievement. I spent around two and a half years as a beta tester for Sky. A number of very, very dedicated players and testers and I spent countless hours in the game having fun, but also working hard to help the developers hone and tune the game to be the best it could be.
For a core group of us, it was a pleasure to go in and play, and report findings. Before I left Facebook, I took a dump of my data. Looking back at that, I have 522 videos I’d recorded (amassing almost 21 hours of recordings), in some cases edited and uploaded to the Sky beta group.
They were fun times, and during that period I’d been to a couple of ComicCons with my youngest child, and discovered cosplay whilst doing so.
I thought it would be neat to create a Sky cosplay.
One of the iconic things about Sky and the children that run around in that world is the cape. The cape is life in Sky, and it’s also the means by which one can fly. The cape charges when your child is close to light, or in clouds, and when it charges, there is a lovely animation.
Now, before I continue, I should point out that my experience as a beta tester is primarily with the original feathered cape. Somewhere around March 2020, thatgamecompany discarded the original cape in all its beauty because it apparently took up too many resources.
When all this happened, the capes became plain mono-colour capes with no decoration or shape. Some time after launch, complex capes began to creep back into the game, but to date, so far as I know, the original cape has not returned. What follows is my attempt to recreate the original cape for a cosplay outfit.
What I wanted was to re-create the classic, original Sky cape in my favourite in-game colour, purple. But I wanted more than that. I also wanted it to glow, and to animate in a manner similar to that of the original cape. So what does that look like?
This meant electronics, and more specifically, LEDs that I could program. But it also meant fabric that would let me see the light showing through.
I also needed to learn how to sew.
I started by searching online to find out how to make a cloth cape. There are a lot of sites online that show you basic patterns and techniques. Here is the one I found most helpful:
It’s not really all that hard; getting the basic measurements is the trick. In my case, because I wanted to put electronics into the cape, I needed to factor that in as well.
🚨 Now, pay attention to that website when it comes to the neck, because this is one area I failed in. I forgot to allow for the neck and how to cut it out. I think it’s something I could have fixed, but I decided not to when I created the cape originally. The thing was that I wanted a fold-over, like a collar, and I didn’t know how to do that, so I left it in the too-hard basket and focused on the stuff I could do.
For me, I decided on a cape with a radius of 90cm. I’m tall, and so is my child, so this would allow the cape to hang to roughly the length seen in the game.
The next thing for me was creating the cape design.
Step 2 – Design
Unfortunately, because the cape I was building was no longer in-game, I had to draw on my old recordings and screenshots to allow me to rebuild the cape design so that I could put it (somehow) onto fabric.
So I spent an inordinate amount of time trawling through my recordings, finding a collection of reference images (all very grainy) so that I could see with some clarity the pattern and layout of the cape.
Below is a good depiction with some clarity, showing the entire cape.
Once I had this and a lot of other images ready, I set about recreating it in Photoshop, shape by shape.
I made the mistake at first of creating it as a circle, resulting in the following (showing where I saw myself adding the LEDs):
Whilst this looks great, it’s completely wrong. When I printed this out, cut it out and tried to make it look like a cape, it was all wrong. Of course, I’d made the mistake of creating the design around how it would look when worn, wrapped around the Sky kid, not how it looks on a flat surface.
I realised that I actually needed to redraw the design on to a half circle. After all, when you look at your Sky kid flying, you don’t see a circle.
So after some more hours in Photoshop, I ended up with:
Now note a few things about the above diagram. First off, there is a border outside the bottom of the petals:
It’s important to remember that because I was putting LEDs inside this cape, it would essentially be like a pocket. I would end up sewing two large pieces of fabric together and then turning it inside out. Having this spare cloth around the edge gave me room to do the sewing without affecting the design.
And printing this, and playing with it as a cone, we get:
Plan A
So, with a design that works, the next plan was to print out, to scale, each of the pieces that made up the cape pattern. You see, my plan was to re-create all of those designs by cutting out a piece of fabric of the right colour, and then sewing them all onto a large purple piece of cloth in the right pattern.
But before I did that, I needed to find the right fabric. So off to the local fabric store (in my case, Spotlight). I took with me a small circle of LEDs with a controller chip to animate them in a manner similar to what I wanted. I wandered through the various aisles of fabric, testing them for translucency, weight and aesthetic feel.
In the end, I chose four fabrics in total:
The next step was to use my printed templates for all of the elements of the patterns on the petals of the cape to cut out all of the pieces so that I could then sew them onto the backing fabric.😲
Note the colour differences in the paper pattern. My plan was to multi-layer the cloth to get the visual effect.
I went to the trouble of borrowing a sewing machine because I thought it would be 1. quicker, and 2. give a better, more uniform stitch. The only machine I could find was quite old, and we couldn’t work out how to change the stitch. So my first and only attempt to use it as a test resulted in:
And soon after I started, I realised just how much of a task I’d set myself. I quickly discovered just how bad I am at cutting fabric, sewing fabric, and so on. It became obvious that the end result was going to fall far short of the vision I had in my mind.
Plan B
What could I do? Well, one of the things I’d seen my child do was put their artwork on a site called RedBubble, where other people could then purchase items like socks or stickers featuring my child’s art.
I discovered that I could upload an image of the cape design, and have it printed on a large “wall hanging”! And by doing that I basically got out of having to sew all of the detail of the feathered pattern on the cape!
So after a week or so, the wall hanging showed up, and it was perfect! This would become the outer face of the cape. The fabric was a little thinner than what I’d planned to use/create in my Plan A, but it was worth it because it just saved me a great deal of stress.
Back to Spotlight I went, and bought a large piece of heavy, dark purple fabric that would act as the back to the cape. It was time to get started.
Step 2.5 – Making the cape itself.
Let’s step back a bit (I know, this has been going on for a while already). The cape, because it needs to contain and conceal the electronics, has to be like a pocket; a giant, semi-circular pocket, sort of like a pita bread…
So to make the cape I needed to take the wall hanging and pin it to the backing fabric, but with the pattern face-down.
It’s important to be generous with the pins. You want the two pieces of fabric to be as one, so that there is no movement of one or the other as you sew them together.
This was the single longest process for me in the entire exercise of making the cape. Without a sewing machine I needed to sew the entire semi-circular outer edge of the cape, one stitch at a time. I also spent a bit of time, because I was hand sewing, finding out which stitch to use. I settled on the “Back stitch”. Here is a link to a good tutorial on how to do a back stitch:
Note that I’ve pinned along the outer area of that purple border that I mentioned earlier on. That keeps everything secure as I sew close, but not on the line between the border and the beginning of the printed pattern.
The reason for this is that when you finish sewing and you turn it out the other way (so that the pattern is on the outside), the edge of the pattern is visible and not lost to stitching.
Once I’d sewn all the way around (and it took me a week of nights to do this), I was then able to cut around the curve, along the outer edge of the purple border, and outside the pins.
I could now turn the cape out for the first time to see the results of my handiwork.
So you can see that sewing as I did, once the cape is turned out, the line of the pattern is just where I wanted it to be.
With the curved side of the cape done, the straight edge needed doing. This was different because I didn’t want to sew it. Along this edge I’d left a good few centimetres of excess. If I folded that over and sewed it, like a hem, I’d end up with stitching all along the straight edge, right through the pattern and that would be ugly.
So I bought some iron-on adhesive tape.
I then proceeded to fold, and then iron down, the hems of both the outer fabric (from the wall hanging) and the inner fabric. This gave a really nice finish.
In most respects, the fabric cape was now mostly complete; mostly…
I needed to create some wrist bands and sew them to each wingtip of the cape. For this I took a 5 centimetre wide strip of the leftover backing fabric, placed some of the adhesive tape down its centre and then ironed each side over the edge, providing a nice purple wristband.
I also bought some small velcro dots for help with closing up the opening when the time comes.
Time to add some life to this thing!
Step 3 – Electronics
Now that I had the fabric element sorted, I needed to give some focus to the electronics.
I wanted to embed inside the cape a lot of LEDs, and I wanted to animate them in a way that would roughly approximate the animation seen in the original cape.
Here, I’ve slowed down the cape charging animation as a reminder:
So searching online, I found that I could get rolls of LEDs that are programmable, allowing me to set the colour of each individual LED on the roll. Each roll was 5 metres long. I ended up settling on measurements that allowed for 20 strips of LEDs containing 20 LEDs each.
These rolls contained WS2812B LEDs that are programmable, providing a full RGB range of colour. These were perfect. Now how to control them? Well there are a myriad of tutorials online on how to do that. A couple that I referenced are:
These were great because I had an old Raspberry Pi lying around, so I could prototype and play with the animation sequence, to test whether I could make it do what I wanted. This was all done with a small LED board from my local electronics retailer.
Here is the resulting animation on that LED board.
Note that it differs in one key way from the in-game animation. In the game, charging leaves a glow remaining in the cape, and I couldn’t do that, simply because it meant leaving all of those LEDs on a lot longer. These smart LEDs are power hungry, and if the wearer didn’t want to carry around a massive battery, this was the compromise I had to make.
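To give a feel for the prototype, here is a minimal sketch of that charging pulse in Python using the rpi_ws281x library. This is illustrative rather than my exact code, and it assumes a single 20-LED strip with its data line on GPIO 18:

import time
from rpi_ws281x import PixelStrip, Color

NUM_LEDS = 20                       # one strip cut from the roll
strip = PixelStrip(NUM_LEDS, 18)    # data line on GPIO 18
strip.begin()

def charge_pulse(r=170, g=0, b=255, tail=5, delay=0.03):
    # Sweep a bright purple "charge" pulse from one end of the strip to the
    # other, fading out behind it rather than leaving the LEDs lit
    # (the power compromise mentioned above).
    for head in range(NUM_LEDS + tail):
        for i in range(NUM_LEDS):
            d = head - i
            if 0 <= d < tail:
                level = 1.0 - d / tail
                strip.setPixelColor(i, Color(int(r * level), int(g * level), int(b * level)))
            else:
                strip.setPixelColor(i, Color(0, 0, 0))
        strip.show()
        time.sleep(delay)

while True:
    charge_pulse()
    time.sleep(2)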
With the animation worked out, I now needed to work out how I could translate this little circle into a cape. I also needed to find a solution for controlling the animation other than a Raspberry Pi, which is too big and also too power hungry.
And I discovered that the Arduino-compatible Gemma M0 from Adafruit was just what I needed! This tutorial showed me a lot of what I needed to know:
The Gemma M0 is a terrific little board, with a CPU and all of the connectors I needed. I needed a single data line to drive the LEDs, and another to act as the button sensor.
The original rough wiring diagram I came up with was like this:
But the problem with this was that with 20 LEDs per strip and 19 strips, that is 380 LEDs, and there is a limit to how many LEDs you can drive as a single chain.
Part of this is memory, but part of it is CPU speed. Because I was using a relatively low-spec chip, there was no way I could animate all 380 LEDs independently without running into performance issues.
So I chose a different path; the data line that drives each of the 19 strips would be run in parallel, in the same way the power is. This meant that the CPU only has to manage 20 LEDs in total. It also meant that each strip would be animated in sync with the rest.
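As a rough sketch of what the Gemma M0 ends up doing (this is CircuitPython, with assumed pin choices of D1 for the shared data line and D0 for the pushbutton; my actual wiring may differ), note that the code only ever addresses 20 pixels, even though 19 physical strips are listening:

import time
import board
import digitalio
import neopixel

# One logical 20-pixel strip; all 19 physical strips share this data line in parallel.
pixels = neopixel.NeoPixel(board.D1, 20, brightness=0.4, auto_write=False)

# The wearable pushbutton pulls the pin to ground when pressed.
button = digitalio.DigitalInOut(board.D0)
button.switch_to_input(pull=digitalio.Pull.UP)

PURPLE = (170, 0, 255)

def charge():
    # Fade the 20 pixels up and back down; every strip in the cape mirrors
    # this because they all listen to the same data line.
    for level in list(range(0, 101, 5)) + list(range(100, -1, -5)):
        pixels.fill(tuple(int(c * level / 100) for c in PURPLE))
        pixels.show()
        time.sleep(0.02)

while True:
    if not button.value:    # pressed (pin pulled low)
        charge()
    time.sleep(0.05)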
To wire the strips in parallel like that, in between each LED strip inside the cape I needed to run wires joining each strip. I needed this to be flexible but secure. What I settled on was creating tiny little circuit boards by cutting them out of a larger “Vero” board. Each little board would have 3 lines, one each for 5V, Ground, and Data. Here is a rough diagram of what I needed to do:
This was laborious but necessary. I cut out each piece of veroboard (or stripboard), filed the corners so that they were rounded and smooth, and then proceeded to solder them all together with small lengths of ribbon cable.
I then used more lengths of ribbon cable to connect each board to an LED strip (🚨 important: connected to the INPUT end of the strip).
After putting this all together, it was time to lay it all out on the backing fabric on the inside of the cape.
Looking good. What about in the dark?
So whilst this is looking great, and shows that all my efforts at wiring and programming are working, there is a problem with the LEDs.
I don’t have a clip of it still, sadly, but what I noticed was that the LEDs showed through the cape material very clearly as little dots. So I needed something to diffuse their light that could be inserted into the cape over the top of the LEDs without making the entire cape overly bulky.
Testing a piece of material at the store I found this worked. You can see most of the lights are diffused, but a few are not, showing the difference:
The material I settled on was this stuff:
So to demonstrate this in a raw manner:
And using a piece of this as a test on a single strip of LEDs inside the cape produced this:
And to take that a step further, in this clip you can see the difference between diffused and natural.
This diffuser was flexible, but it was also quite stiff. Having the entire cape lined with it was going to change the way it hung on the wearer. So what I did was cut long strips of the material, roughly 2 to 3 centimetres wide. I would then sew them down over the top of each LED strip within the cape.
One thing I’ve not really talked about (what really? You’ve gone on, and on, and on…) is how to power this thing. There are two parts to the electronics; the Gemma M0, and the LEDs.
The Gemma M0 needs a 3.2V to 5V power source, and the LEDs need a 5V power source. Now the Gemma M0 can run for ages on very little but those 380 LEDs need lots of power.
I elected to use two power supplies. For the Gemma M0, I bought a small 2xAA battery holder, and for the LEDs I settled on a battery pack of the kind you might use to charge your iPhone. One trick with doing this is that because the Gemma M0 and the LEDs are still electrically connected, the grounds (GND) of the two power supplies need to be connected together.
So how is the entire wiring done? Well here is another of my terrible diagrams to answer that:
Step 4 – Final construction
With everything now worked out, it was time to put it all together. This was pretty straightforward, and involved the following:
I purchased a “wearable” push button that I then attached at one extreme end of the cape, where the hand would be that moves the cape. This button was then attached via two wires to the Gemma M0 to act as a trigger for the charge animation.
Stick each of the LED strips onto the inside of the backing fabric. The LED strips I’d bought had adhesive backing so this was pretty easy.
Each of the little circuit boards I made joining the strips together needed to be insulated so I covered them with insulation tape. This also helps to secure the wires and reduce the risk of them breaking as the cape is worn.
Each insulated circuit board was then adhered to the backing fabric to further reduce wear and tear. I probably should have used a softer, more flexible wire. I went with ribbon cable because it’s cheap and I had some.
I then sewed a strip of the diffusing fabric over each LED strip. This was done pretty roughly; it just needs to be secure enough to stay in place.
The Gemma M0 needed to be sewn into place so that it’s not floating around.
The pushbutton was sewn into place near the tip of one edge of the cape and the wires between it and the Gemma M0 secured.
A small “button hole” was cut and sewn near the nape of the neck so that power leads from the cape could be fed through. These power leads could then run down the back of the wearer to belt mounted battery packs.
The two wristbands I’d created were then sewn into place.
The small velcro dots were stuck into place along the inside straight edge of the cape so that the opening could easily be closed, whilst leaving me room to open it all up again in case there is ever a problem.
The End Result
Sadly, COVID got in the way just as I was finishing the cape, and as we planned to go to our first ‘Con’. We had to rush a mask together, and we managed to attend SuperNova Melbourne 2020 before everything changed.
Our plan was to add more to the cape, to add animations to the diamonds down the back, and to add a glowing chest light as well, but life changed, and focus shifted.
The final cape can be seen in the following video, at home as a test, and then very briefly at the ‘Con’ (we were a little uncomfortable videoing amongst other people).
In October 2018 I released version 2.0 of World of Hex, and this one came with a massive expansion to the game that for some reason I never thought was worth writing about. 🤷♂️
Further to that, in August 2020 I also added macOS support which was far more significant than I had expected it to be at the time.
Now, in 2022, as I embark on another series of enhancements to the game, I thought it might be worth writing a little about the process of adding an entire (well sort of) solar system to the game.
When I first built the game I already had the Moon (or Luna) moving around the Earth at the right time and I was quite happy with that, even if I might not have got some aspects of it right. I had had thoughts of adding a colony to Luna but that wasn’t part of the original plan, so I launched the game with just Earth as a playable colony. It seemed like enough.
But then in 2018 I got the itch to expand, and with Mars being a hot topic I thought it would be neat to add not only a colony on Luna, but one on Mars as well. After all, with everyone wanting to get to Mars first (how many Netflix series are there taking people to Mars?), I thought it would be a good drawcard for the game.
Add to that the fact that I’d been thoroughly enjoying watching The Expanse™ on Netflix, and that itch grew; I began to wonder just how hard it would be to “expand” the game to encompass more of the solar system, providing colonies on much more than just Luna and Mars.
Prototyping a solar system
So it was time to start learning how to build a model of our solar system. Google was my greatest initial resource along with Wikipedia, and soon I happened upon a set of data that I could use to model the positions of each of the planets in the solar system.
I started off with a small Swift project just as a prototype to see if I could do what I wanted in SceneKit. The existing app used SceneKit to display the Earth (a sphere) surrounded by a Hexasphere, with another appropriately scaled sphere (Luna) orbiting the Earth.
This prototype adopted a set of Orbital Elements, or Keplerian Elements to place each planetoid (as I called all of the planets and Luna) in the correct position given a date & time. I was thus able to render a basic solar system using SceneKit quite easily.
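For the curious, the core of that calculation is surprisingly small. Here is a minimal Python sketch of the idea (the prototype itself was written in Swift, and the element values come from published tables, so treat this as illustrative rather than the app’s code):

import math

def kepler_position(a, e, i, omega, w, M):
    # Heliocentric ecliptic position (in the same units as a) from Keplerian elements:
    # a: semi-major axis, e: eccentricity, i: inclination,
    # omega: longitude of the ascending node, w: argument of perihelion,
    # M: mean anomaly at the date and time of interest. Angles are in radians.
    # Solve Kepler's equation M = E - e*sin(E) for the eccentric anomaly E.
    E = M
    for _ in range(20):
        E -= (E - e * math.sin(E) - M) / (1 - e * math.cos(E))
    # Position in the orbital plane, with perihelion along the x axis.
    xp = a * (math.cos(E) - e)
    yp = a * math.sqrt(1 - e * e) * math.sin(E)
    # Rotate into heliocentric ecliptic coordinates.
    cw, sw = math.cos(w), math.sin(w)
    co, so = math.cos(omega), math.sin(omega)
    ci, si = math.cos(i), math.sin(i)
    x = (cw * co - sw * so * ci) * xp + (-sw * co - cw * so * ci) * yp
    y = (cw * so + sw * co * ci) * xp + (-sw * so + cw * co * ci) * yp
    z = (sw * si) * xp + (cw * si) * yp
    return x, y, z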
I was then able to take that prototype, rewrite the key elements of it in Objective-C and integrate it into World of Hex.
That then meant I needed to craft Hexaspheres for both Luna and Mars, which would act as representations of the colonies on those planetoids. I built these based on my own naive ideas of what the colonies on those planetoids might look like (shape-wise). (In hindsight, I think the Mars colony is very poorly laid out and ignores the topography of the land entirely 😬)
At this point, the core functionality supporting multiple worlds with colonies worked, and I could have these three colonies, but I needed to add a way for the player to move between them.
Enter the solar system view. As I mentioned above, I’d been watching and thoroughly enjoying The Expanse™, and one wonderful thing I saw in a number of episodes was a holographic, interactive rendering of the solar system, showing the trajectories of planetoids (and, in this case, spacecraft). Here is one example from Season 1, episode 6:
I wanted something like this. And I wanted, when a player chose to shift focus to a colony on another planetoid (at this time, Earth, Luna or Mars), the camera to be manipulated so that it looked as if the player were in a spacecraft, and that spacecraft would complete a partial orbit, turn to face the destination and then travel there.
This video shows what I was aiming for in this. It’s rough, based on a very early attempt at the animation.
I spent what now seems a ridiculous amount of time (months) obsessing over getting this animation working. In the end, it beat me. I could not get it to work the way I wanted it to.
What I’d wanted, in my I-am-not-a-pilot-or-astronaut position, was to compute the orbital position around the current planetoid that is closest to the destination planetoid, travel in orbit to that point, turn there to face the destination, and then travel to the destination, keeping in mind that it is moving through space as the spacecraft moves (so, in essence, travelling to where it will be when my spacecraft is due to arrive). To this day, I’m still not sure where I went wrong.
The code is still there inside the current version of World of Hex; it’s just not enabled anymore. Apart from the problems I had getting it to work (so, so many variables), it also became obvious that if the player just wants to move from, say, Earth to Mars, the transition, whilst nice, would probably end up taking too long, and the player would just find it boring.
Below is a snippet of what it looks like when I re-enable that code within the current baseline.
At this point, being heavily inspired by The Expanse™ I started to think about adding more colonies. That also meant adding moons. Then of course in The Expanse there are also colonies on some of the larger asteroids.
This presented a problem for me. The Keplerian elements are great, and the implementation was remarkably easy given some hard work by people much smarter than I am, but you can’t use the same mechanism for some of the moons or for the asteroids. I couldn’t find data for them until, with some advice from my online friends at Stack Exchange (in this case, the Astronomy Stack Exchange), I arrived at the realisation that I needed a different source: NAIF’s SPICE toolkit.
I ended up in communication with the kind people at NAIF and one particular person from the Astronomy StackExchange, who gave me some advice and guidance on what I needed to do to get the information I needed from their CSPICE library. This required that I port their library (which itself is a conversion of the original Fortran into C) to compile and be usable on iOS and macOS. macOS was easy, but I found that the library assumed that files that it reads are in the “current directory” which is not going to be the case on an iPhone or iPad. So I went through the process of adding a wrapper to the library, and adding a way to tell CSPICE where to look for any files it needs to access.
This all worked a treat, and I was also able to get their TSPICE library of unit tests running, allowing me to validate what I’d done.
I’ve since been given the OK by the people at NAIF to make the Git repos for both public:
With CSPICE in place and working, my next problem was that the data required to feed into CSPICE to provide the paths of all of the planets, moons and asteroids was bigger than I could reasonably load into memory.
As mentioned in my stack exchange query, I then used the SPKMERGE tool to generate ephemeris files that contained just the data I needed for my app. The app then loads those files via CSPICE and generates the positional data it needs. The data currently built into the app should be good till 2050. If World of Hex is still running by then, then I’m afraid it will probably be someone else updating the data files.
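The app calls CSPICE from C and Objective-C, but to give a flavour of what’s involved, this is roughly what the equivalent looks like in Python via the spiceypy wrapper (the kernel file names here are placeholders for the leap-seconds kernel and the merged ephemeris files that SPKMERGE produces):

import spiceypy as spice

# Load a leap-seconds kernel plus the merged ephemeris kernels.
spice.furnsh("naif0012.tls")
spice.furnsh("merged_planets.bsp")

# Position of Jupiter relative to the Sun at a given moment,
# in the ecliptic frame, ignoring light time.
et = spice.str2et("2020 OCT 22 12:00:00")
position, light_time = spice.spkpos("JUPITER BARYCENTER", et, "ECLIPJ2000", "NONE", "SUN")
print(position)   # x, y, z in kilometres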
At this point, the solar system managed by World of Hex now comprised (with colonies on those with a ⬢):
Sun
Mercury
Venus
Earth ⬢
Luna ⬢
Mars ⬢
Phobos
Deimos
Jupiter
Io ⬢
Europa ⬢
Ganymede ⬢
Callisto ⬢
Saturn
Titan ⬢
Uranus
Titania
Neptune
Pluto
Ceres ⬢
Pallas ⬢
It’s worth mentioning here that both Phobos and Deimos, the moons of Mars, move so fast that it became too difficult to keep the camera focused on them, so, apart from the Sun itself, these two planetoids are the only ones you can see but not visit.
Textures and visualisations
Another thing to consider, now that I had positional data, was providing textures for each of the planetoid bodies. For this, initially, I relied heavily on a website called Solar System Scope. (Incidentally, this really is a terrific resource, and it gave me some inspiration as well.) From here I was able to obtain high quality textures for most planets, and those I could not get there I sourced from sos.noaa.gov, or from artists who contributed to the Celestia project. Credits are all provided in the game on the credits screen (if you can find it…)
For the two asteroids, Ceres and Pallas I confess to cheating and just treating them as spheres so that I could use Hexaspheres to represent their colonies.
The Great Conjunction
Back on the 22nd of October, 2020 I was able to record The Great Conjunction, the lining up of Saturn, Jupiter and Earth from within World of Hex, showing that my use and implementation of the SPICE libraries had worked. Here is a short recording I shared with the folk at NAIF:
Scale; the art of fitting an entire solar system onto a screen.
One of the things I had to wrestle with was how to deal with the vast distances covered by the planets, and especially the asteroids and the dwarf planet Pluto, while still being able to show more than just a dot for the inner planets.
If you look at most implementations, and Solar System Scope is no exception, it’s impossible to fit it all on a single screen, whether it’s an iPhone or a 42″ TV.
So whilst World of Hex correctly computes everything, the representation is all scaled in post processing of the computed positional data so that:
You can see everything within reason; and
It is still a reasonable representation of the distances.
In addition to scaling the distances, I also took the step to scale the size of the planetoids, and this scaling changes between when you’re in normal play mode, focused on one planetoid, and when you’re in the solar system view mode. When in solar system view mode, the scale of both the distances, and the sizes is dynamically recomputed so as to make the solar system easily navigable.
This gave me an approximation of what I’d seen on The Expanse™ that I was happy with.
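As an illustration of the kind of post-processing I mean (the actual numbers and curve in the game differ, so this is just a sketch), a simple power-law compression keeps the ordering of the orbits while pulling the outer planets in close enough to fit on a screen:

def display_distance(distance_km, exponent=0.4, au_km=1.496e8):
    # Compress real distances: 1 AU maps to 1 display unit, and each further
    # AU contributes progressively less, so Pluto still fits on screen
    # without the inner planets collapsing into a single dot.
    return (distance_km / au_km) ** exponent

# Earth (1 AU) -> 1.0, Jupiter (~5.2 AU) -> ~1.9, Pluto (~39.5 AU) -> ~4.4
for name, au in [("Earth", 1.0), ("Jupiter", 5.2), ("Pluto", 39.5)]:
    print(name, round(display_distance(au * 1.496e8), 2))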
Now, if our iPhones or iPads had a 3D holographic projector that could place the solar system all around us as seen in The Expanse™ I’d be looking at how to make that happen…
So what’s going on now?
I’ve written this (rather long and rambling) post because I’ve been working on some changes to the game as part of a push to improve its presentation, perhaps open it up to more players, etc. I’ve just finished a two-month effort to create the first large update since bringing the game to macOS back in August 2020. The update notes for v4.2 are longer than I ever intended them to be.
I initially intended “simply” to add portrait mode to the iPhone. As part of doing that, one thing I’d needed to fix was how I put text on the screen.
When I originally developed World of Hex, SKLabelNode, part of SpriteKit, did not support attributed text (text that can change style), so I’d built a way to do this that worked, but not well.
Apple have kindly added native support for Attributed text in the meantime, so I’ve moved to using that, and an entirely different method of creating that Attributed text for display via a wonderful library of code called SwiftRichString. My expectation now is that I will be able to add support for more languages, thus widening the audience of the game.
The full update notes for version 4.2 are listed below:
World of Hex is starting a new phase of enhancement, and to kick this off, the way text is displayed on the screen has been improved, so if you’ve seen a partial message, or been confused about something, perhaps this will help.
On iPhone and iPad, support for Portrait mode has been added, at last! Yes, you can now play World of Hex one-handed on your iPhone or iPad whilst standing on the train commute to work. With this rather large change a number of related bugs have been fixed that have been plaguing players for a while now.
New users are now introduced to their AI module right from the get go, so that they know there is more to the game to build towards. Did you know that you can program your own AI with commands to defend the world tiles you win? No? Well all you need to do is reach level 2, and access will be granted!
The leaderboard now displays more information about how well those players at the top have been doing.
For those of you that have noticed that the Earth seemed to get confused about just where the Sun actually is, and turned the wrong face to the Sun, I believe I have finally fixed this. It won’t affect play at all, but it affects my sanity. So this way I get some peace of mind too.
Finally, finally, World of Hex no longer stops your favourite music from playing when you launch the game! I’m sorry this one has taken so long.
Panning a world on macOS now works as nicely as it does on iOS, iPadOS and tvOS. And it’s even nicer on those too!
The player information panel now shows a small meter to indicate to you how much more you need to play before reaching the next level.
Experience points (XP) can be earned faster now with the addition of a “win multiplier” for each world tile. If you, or someone from the same faction keeps winning consecutive games in a given world tile, then XP are earned much faster.
On macOS, panning to rotate the selected planet or moon using two fingers now works as smooth as silk (finally).
Fixed a number of nasty memory leaks and buggos.
Fixed a problem where the colour of a tile was not changing after you win a game. This bug was introduced back when I added the Game Center Achievements. My apologies; it’s fixed again.
A note for the wary. Apparently, if you force kill World of Hex, iCloud can get its knickers in a knot and stop sending the app the background notifications that allow it to keep the state of the world tiles up to date. If you’ve done this, and think the tiles are not accurate, then the only way to fix it (apparently) is to reboot the phone. Not my preferred advice to anyone, but it’s all I’ve got. I spent quite some time trying to work out if I’d done something wrong, but no…
On the Apple TV, something special, well I think so. You can now zoom in and out in little bits.
And where to next?
Now that v4.2 is out there, the current roadmap of things I want to do is:
Allow it to be played offline (which is needed for an Arcade title I understand).
Add portrait mode on iPhone.
Add localisations to the game to make it more accessible to non-English speaking players.
Add accessibility to the game.
Add controller support, especially for tvOS.
Add more visual polish and eye-candy.
Add more moons and asteroids.
For fans of The Expanse™ add a pocket universe and allow the Ring Gate to be activated.
Add an Easter egg or two. There is already one (the credits scene) if you can find it.
I call this a roadmap, which implies some sort of order. As you can see, the order means little. Some items are easy enough to do but cost money (and believe me, this game does NOT pay its way) because I need to pay others for things like translations.
If you’ve made it this far, thanks for reading. I hope it wasn’t too hard to follow.
Back in the 1980s, when I used to spend way too much time playing games on my Apple IIGS (and earlier, my Apple IIe), one of my favourite games was Fortress, by SSI.
Fortress gave me a small game board where I would fight it out against one of several computer AIs; a game consisted of 21 turns, and whoever controlled most of the game board at the end was the winner.
One of the things I loved about Fortress was the way the AIs got smarter with time. When you first started playing, it was easy to win, but after a few games it became more challenging. This kept me coming back to Fortress, as I felt I was playing against something that basically learnt as I did.
As a programmer/developer, my mind is rarely idle, and I always have a project on the go. In 1994 I thought it would be neat to rewrite Fortress for the Apple IIGS, using higher resolution graphics.
I started doing this with ORCA/Modula-2, which I had recently brought to the Apple IIGS with publishing help from The Byte Works and some connections at Apple.
As part of writing this blog post, I’ve run up my Apple IIGS environment (yes, I still have all of it) within the wonderful Sweet16 emulator and found that code:
I hadn’t realised just how much of the game I had written. I thought I’d only written a bit of the game logic; however, it turns out I’d written a lot of the UI as well, as can be seen when I ran it. The AIs hadn’t been written, but the basic building blocks were there.
The funny thing is, I have the code; I have a compiled binary that I can run, but I can’t remember how to re-compile the source code anymore. I’ve got a build script there, but my memory fails to help me out.
One of these days I should bring all that code out, and store it somewhere safer.
Around this time I got distracted, and many of my home-based projects took a back seat, Fortress included. My work took me away from Apple development entirely for around 15 years.
So Fortress GS was left on a floppy disk (or two) in a box of backup floppies along with everything else.
Then, in 2012, after I’d been back developing for Apple hardware again for a few years, I got the bug again, and, having recovered my entire Apple IIGS development environment from hundreds of floppies and some second hand SCSI drives (my, how they’ve grown; did you notice the size of the “M2” hard drive above?), I was able to revisit Fortress GS.
I ported the guts of the code to Objective-C and wrote a basic prototype to show to another developer at the time as a proof of concept. This one was really basic, but it allowed me to place moves for both sides by tapping the screen.
I showed this to a designer I knew at the time who thought the idea was great, but suggested that it would be more interesting with a hexagonal grid rather than the rectangular one.
I toyed with the idea at the time, but I did nothing with it; I had other projects happening, and I wanted to focus on my educational apps.
Moving up to 2016, and the release of the Apple TV, I launched my latest educational app, Tap Tangram (which I later launched as Classroom Math Drills), and due in part to my failure to recognise that I’d missed my target, and the complete lack of featuring by Apple, the app never gained any traction and failed at launch.
That left me wondering what to do next, and then it occurred to me to reboot the Fortress app idea once again. I’d also recently read a most-excellent blog article by @redblobgames about manipulating hex grids in software, so my mind was abuzz with what I could do with it.
Enter World of Hex, my latest, and final attempt to reimagine the classic Fortress for iOS and the Apple TV.
I started out just playing with the hexagonal grids code that I wrote as a port of the code provided by @redblobgames and getting the basic board working with the underlying move computations.
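If you haven’t read that article, the core idea is to store hexes in cube coordinates, which makes neighbours and distances almost trivial. Here is a tiny Python sketch of the pieces I ported (the game’s version is in Objective-C):

from collections import namedtuple

Hex = namedtuple("Hex", ["q", "r", "s"])   # cube coordinates, with q + r + s == 0

DIRECTIONS = [Hex(1, -1, 0), Hex(1, 0, -1), Hex(0, 1, -1),
              Hex(-1, 1, 0), Hex(-1, 0, 1), Hex(0, -1, 1)]

def hex_add(a, b):
    return Hex(a.q + b.q, a.r + b.r, a.s + b.s)

def hex_neighbor(h, direction):
    # The adjacent hex in one of the six directions (0-5).
    return hex_add(h, DIRECTIONS[direction])

def hex_distance(a, b):
    # Number of steps between two hexes.
    return max(abs(a.q - b.q), abs(a.r - b.r), abs(a.s - b.s))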
Once I’d done that, I sat down and brainstormed how I wanted the app to work; how the game would play and during this process, I asked myself:
“What if, rather than a simple rectangular grid of cells, we had a map of the world as a map of hexes?”
And then I got going.
“What if, the terrain was somehow represented in this 2D map of hexes. Rather than try to represent the 3rd dimension as a true 3rd dimension, colour the hexes to represent the terrain.”
and
“Hmm. how many cells?”
“Earth’s land surface area: 150,000,000 km²”
“If we say each hex has a real world “size” of 1km, then we need to be able to map out 150 million hexes eventually. Even if they aren’t all being used by players, we need a way to know where on the earth a hex maps to land.”
“So, what is probably easier, is to map the entire planet with hexes, and then mark some as usable land, and others as ocean, unusable land, etc. that means a lot more hexes in the database though. It means millions of hexes to cover the planet completely. too many.”
“Will performance be an issue? yes.”
And so it went; with performance an issue and no real idea at that point of how to make it all happen, I went hunting for others who had built a world of hexes. I needed to get an idea of:
Could I get the basic mechanism to work on an iPhone?
How many hex tiles would I need to build a reasonable approximation of the Earth’s land areas?
How would it perform if I built a model with all those tiles?
After some searching with Google, I happened upon the wonderful Hexasphere.js by Rob Scanlon. This gave me hope. If this could be done in a browser, then I could do it.
So I set about to port his Hexasphere javascript code to Objective-C to see what I could achieve.
This is where I started to hit upon the boundaries of my knowledge of 3D modelling and SceneKit. I also found myself struggling with some of the math concepts involved, having to trust in these people that obviously handle it better than I.
I did get Hexasphere working, though it was extremely slow because every hexagonal tile was being implemented as a separate SceneKit node. It did work, but it just wasn’t going to cut it for a production quality game. At this point I was using very large hexagonal tiles, so the tile count was still quite low. Once I increased the resolution of the model, there would be a lot more.
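As a rough guide to how quickly that count grows (assuming a Goldberg-style tiling built by subdividing an icosahedron, which is the kind of sphere these hexasphere libraries produce: 12 pentagons plus hexagons), the tile count is quadratic in the subdivision level:

def tile_count(divisions):
    # A Goldberg-style sphere tiling from an icosahedron has 10*d^2 + 2 tiles:
    # 12 of them are pentagons, the rest are hexagons.
    return 10 * divisions * divisions + 2

for d in (8, 16, 32, 64):
    print(d, tile_count(d))   # 642, 2562, 10242, 40962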
I ended up posting a question or two on the Apple developer forums and the Games Stack Exchange. These helped me better understand how to improve the performance of my 3D model however I was still hitting problems in that the on-screen representation of the Hexasphere was not high enough quality.
I spent several weeks working on it and getting some great help from colleagues who knew math and 3D rendering far better than I do. The end result of that was a perfectly rendered Hexasphere using only 4 SceneKit nodes that rendered at a full 60fps on devices as old as the iPad 2. The change was to put all of those tiles into a single model, and to colour them individually via the shader and its inputs.
I finally had what I needed to get on with the game.
At this point it was just a matter of bringing all of the pieces of the puzzle together and making them work well.
For this game, the main pieces were:
The hexasphere code
The Hex Grid code
SceneKit and SpriteKit
CloudKit (iCloud based database)
I’ve already spent enough time on the hexasphere and hex grid, so I’ll try to restrict the rest of this post to the hurdles I had finishing off the app and bringing it all together.
SceneKit and SpriteKit
Apple’s engineers have done a wonderful job with these two APIs. Having developed most of my apps with Cocos2D, the transition to SpriteKit and SceneKit was pretty painless. The primary difference for me was the coordinate system.
The main reasons I went with Apple’s frameworks this time were:
I wanted to be able to render the 3D world, which Cocos2D wouldn’t do.
I also wanted to branch out and learn something new.
That said, the trick was that I needed to be able to overlay my 2D game components on top of the 3D components. After a little research I discovered that Apple had kindly given us an “easy” way to do this via the overlaySKScene property of the SCNView class.
This works remarkably well; however, it does introduce some problems because there are bugs in the Apple frameworks (at least, there are at the time I write this). I found that there are some things, like animations of the SpriteKit nodes, that need to be forced to run within the SceneKit renderer thread. It seems that Apple use a multi-threaded renderer for SceneKit/SpriteKit, and some operations that you’d expect to be thread safe aren’t.
With a lot of help from Apple Developer Technical Support, I found and fixed this problem and filed a bug report #32015449 (github project) accordingly.
Another issue related directly to the use of overlaySKScene was an incompatibility with the tvOS focus engine (it basically doesn’t work). I ended up having to port a focus engine I’d written for Cocos2D on tvOS and enhance it to work with World of Hex. I’ve also filed a bug report for this issue: #30628989 (github project).
Apart from this, SceneKit and SpriteKit work a treat and have made my life so much easier.
CloudKit and iCloud Integration
Once I’d decided to expand the original game beyond a single game board, and to allow people to play games in a world of game boards I needed a way to store the game boards in the cloud so that everyone sees the same thing.
When I started to develop this idea, my family and I were enjoying Pokemon GO for the novelty it provided. As a user, one of the things I really didn’t like about Pokemon GO was the way it forced users to either associate their existing Google account with the app, or create a brand new Google account just for the game. There were other options, but they all involved forcing the user to log into a specific account, just for the game.
So I looked at Apple’s CloudKit which is just one part of the whole iCloud service layer that Apple has been building and developing for years now. One of the beauties of CloudKit is that for every person using an iPhone or iPad that is logged into iCloud, an app integrating CloudKit will just work because there’s no explicit login required.
This is what I wanted. On the whole, the CloudKit integration was very straightforward, and it does just work.
I really enjoyed the ease with which Apple have allowed us to define our database structure via the CloudKit dashboard, make changes and even migrate those changes from development to production environments painlessly.
If there is one thing that I found lacking it is that in the dashboard, there is no way to simply remove all existing data without also wiping the database model itself.
Conclusion
World of Hex has grown far beyond what I originally set out to write. It’s nothing like my original attempt back in 1994 on the Apple IIGS, and even my really early brainstorming of last year differs somewhat from what I’ve built.
One of the reasons I build these apps is for the challenge and to keep my active mind busy. I certainly don’t make much of an income from them (though, mind you, I wouldn’t complain), so there’s a lot of satisfaction in having an idea realised and released into the world. Yes it can be crushing when it doesn’t take off, but, as I mention in the credits scene within World of Hex (can you find it?), “Never Give Up”.
Learning some of the quirks of Apple’s frameworks has certainly been a challenge. Cocos2D has been wonderful to work with over the years, and in some ways it’s more mature and easier to work with than SpriteKit, however SpriteKit’s deep integration is hard to pass up now that I’ve learnt it.
SceneKit offers some pretty amazing functionality from my point of view. I remember, as a teenager back in the early 80s, having a book with some algorithms for 3D line art animation that blew me away at the time. Being able to draw a model in your fave modelling tool, drop it into Xcode and have it on a device screen in minutes is insanely great. For developers out there who think it’s tough work creating an app, you have no idea how spoilt you are.
If you’ve read through all this, then thanks for staying till the end. It grew somewhat longer than I’d planned.
Here it is, my World of Hex. I hope you take the time to have a game, and that you enjoy it.
I’ve been working away on my latest app, and was just creating a new piece of artwork for the splash screen. When I did this, I wanted to start with the iPad Pro 12″ and scale down within Photoshop to maximise the quality of each asset size.
For all of my other assets, I had started with the iPad for some stupid reason and got my math all confused.
So I went hunting for a guide, and found an old site from Ben Lew of Pi’ikea St. It was a little out of date, plus I really wanted to calculate the sizes using the iPad Pro as the starting point.
So I took Ben’s page and popped it into a spreadsheet. The result is available below for download. I’ve also taken a screenshot so that you can see it easily.
In Photoshop, something Ben taught me to do a few years back was to create a layer called “default”, and, in order to get Photoshop to export the various layers in my file as appropriately sized assets, add the sizes as percentages along with folder names.
For me, assuming my originals are for the iPad Pro 12″, this means I give my ‘default’ layer the name:
As some may be aware, the Parse service is to be shut down on the 28th of January. Parse gave developers 12 months to sort ourselves out and find another place to host our data and drive our services.
I’ve been using Parse for a couple of years now, to provide a push notification service to my uAlertMe app. I was looking at removing the app from the app store, and discontinuing support, because I couldn’t find a cost effective way to keep it all running. uAlertMe is an app that sells perhaps 200 copies a year, so there’s not enough income to cover monthly service fees.
Then, late in 2016, I saw a message on one of the local developer groups that Buddy had established a relationship with Parse, and was providing a wonderful migration tool to allow us, in a relatively pain-free manner, to take our data from the existing Parse service, import it into Buddy, and (hopefully) sit back.
Now, I’d have to say that it wasn’t quite that easy. Because I jumped on board pretty quickly, and because I wanted to use the push notification system, I was wanting to use features that hadn’t been completely polished.
So, with some really terrific support from the kind folks at Buddy, I worked on getting everything working, and as of today, iAlertU and uAlertMe are happily talking via Parse on Buddy.
So, if you still haven’t migrated your Parse data, and are wondering what to do, you have 21 days (as I write this) remaining. Get on over to Buddy and get the process started.
Sort of late posting this here (it’s been on FB and Twitter for a week now).
I’ve discounted most of my educational apps till the end of the year. This is a great opportunity to pick up some great educational apps for those iPad and iPhone gifts:
Please like/share and if you purchase, please rate/review the app.
You know, I think Apple and their incredible focus on accessibility is amazing. My daughter and I are currently waiting for Apple to review our first sticker app, called Heroes and Villains Stickers.
My daughter put all the artwork together, all drawn on her iPad using the ArtStudio app, and I did the “programming” bit (though programming is a stretch given Apple have made it so easy to put a sticker app together). This project is a labour of love for her, as she’s a real fan of the characters she’s drawn.
As we put it all together I realised that all those stickers had names, and that those names actually have function and meaning as you put the sticker app together.
Sticker Properties
When all was ready to submit, I got my daughter to go through all of them and name them with text that reflected the meaning of the stickers.
That has paid off in the final product as with voiceover turned on, tapping on a sticker causes the iPhone to read the name of the sticker out loud.
This is terrific as people with impaired vision can enjoy the stickers too, and send them to people that can see them, knowing that the sticker means the right thing.
It is details like this that make it easy for me to continue working with Apple and their ecosystem as I know that when Apple talk about inclusivity, they mean it.
I’ve been waiting to see what Apple would announce this week. I’ve been hoping for an excuse to upgrade my 2013 MacBook Pro (with retina), partly because I’d like some more storage (It’s amazing how quickly a 256GB drive gets filled once you start developing apps), and because my eldest is about to start University next year and I thought upgrading would mean I could give her my more than capable existing MacBook Pro.
At the moment, she’s working with my original MacBook, a 2009 white unibody MacBook which, as can be seen, has had better days. It still works a charm, though it’s been running hot, with all fans howling for a year or two now.
So, like everyone else working with a MacBook (Pro), I was keenly awaiting the refresh of the line this week. The rumour mills are pretty accurate these days, so we already had a fair idea of what to expect as a minimum, however to be honest, I’d been hoping for more.
I’d also been hoping for a realistic move forward from where I am now, and I really don’t think Apple have provided this.
Dongles Everywhere, it’s the future man…
Last year I saw the new 12 inch MacBook released with its single USB-C port and quickly wrote the device off as a waste of time. The lack of a separate power connector, and specifically a MagSafe power connector, was to me a huge step backwards. The smaller screen size made it doubly less attractive.
When I think about buying a new computer, I like to think I’m buying something that will allow me to continue from where I am, and transition over the next few years to a point where I’ll be ready for the next transition.
With the 12″ MacBook, and these new MacBook Pros, this is not the case. If I were to upgrade to one of the new MacBook Pro units, none, repeat, none of my existing peripheral hardware would be able to connect to it without these stupid, ugly white dongles.
Here is an image from dailytech.com showing the mess Apple is moving us towards (this for a 12″ MacBook, but you get the idea):
Every day I connect, via the standard USB connector, various iPhone or iPad devices, other devices to charge, external drives for backup, and so on to my MacBook Pro. Other people have more than I do.
If I were to ‘upgrade’ to one of the new MacBook Pros, I’d have to find a way to connect all of these devices via a USB-C port. What’s more I’d need to purchase a number of these ugly white dongles (why can’t I get them in a colour to match the MacBook I’ve just hypothetically bought?) if I want more than one plugged in at a time. Apple ‘kindly’ gave us 4 USB-C ports on these new MacBooks but that just encourages us to purchase more dongles.
Pricing
Note: OK, I’ll be quoting Australian dollar amounts here, but they should be indicative of other markets.
Here in Australia, the starting prices (it’s not even worth looking at the spec’d up prices, really) for each of the 3 new MacBook pros are as follows:
13″ MacBook Pro (no Touch Bar): $2199
13″ MacBook Pro (Touch Bar): $2699
15″ MacBook Pro (Touch Bar): $2999
Apart from the Touch Bar, there are some other subtle (or not so subtle) differences, the main ones being the speed of the CPU and the amount of storage.
Now, like many of you, and certainly, many iOS developers who, contrary to popular myth, aren’t living the high life off app sales, those prices just aren’t viable to me. My current MacBook Pro was bought as a refurbished unit from Apple, and it’s probably the way I’ll go next time now that I’ve seen these prices.
Apple are basically saying to me, “Hey Peter, we understand you can’t afford our sparkly new Touch Bar MacBooks, but you can always buy the new MacBook Pro without the Touch Bar. It’s only $2199!“.
My answer to this is that for that amount, I’d be buying a MacBook with the same amount of storage, a slower CPU, less connectivity, less expandability (I currently have a 128GB SD card in the SD card slot to expand my storage economically), and for more than I’d pay for a newly refurbished 2015 MacBook Pro with twice as much storage (see image to the right).
No thanks Apple.
I love your hardware, and I really enjoy the ecosystem you have created. My family is well and truly committed to Apple tech too, but this time around, we’ll be avoiding your new MacBook pros. If there is a new MacBook to be bought, it will be one that we can use now with the peripherals we have now.