Sunday 9 June 2013

Week 14 - Final individual Milestone

For the end of our project, the Geriambience team has created and tested a system that can track a user through a virtual bathroom using the skeletal tracking function of the Microsoft Kinect. Three example bathrooms have been modelled, replicating real-world bathrooms, all fitted with Caroma products, as Caroma sponsored the overall project. The Kinect tracks the user and represents them in these virtual bathroom environments with orbs corresponding to their body parts. The interaction with these environments ranges from simple moving systems to smart, gesture-based interactions the user can perform to alter their bathroom.

Group objectives

Our group had three main objectives throughout the course, although one of these was dropped due to time and technology constraints.
1. Model with high detail a series of three bathrooms. Use Caroma products within these bathrooms.
2. Using the code of the Microsoft Kinect, create a series of interactive elements to place into these bathrooms. These elements will use gesture and positional information taken from the Kinect to register the player in the virtual world and let them interact with it. This is proof-of-concept work, and has real-world applications if taken further.
3. Create a moving mount for the Kinect so that it can track the user no matter their position in the bathroom. This is the objective our group had to stop working on. We kept running into problems with this section, including the realisation that finishing the system would take up at least the rest of the semester, and that the Arduino work we were doing required ordering a large number of different parts, making it infeasible to actually set up.

Individual Milestones
In order to properly meet the group objectives, I needed to set myself a series of personal milestones. These were things that I both wanted to learn, and needed to learn in order to fully achieve our group goals.

1. C++ coding - I have very limited experience with coding, and it has been something I've wanted to learn more about for a long time. The reason I picked this project to work on over the other ones available is because I saw it as a fantastic way to get some practical experience with coding. This urge to learn about it is why it became my major milestone for the semester.
2. Arduino - Linking with my interest in learning the C++ language was my desire to learn how to use Arduino kits. I've never used anything like them before, and they seemed like a fantastic prototyping and learning tool, so I was very eager to learn how to use them.
3. Project leadership - When we were choosing teams, Laura asked me to take over for her in the role of group leader. I've worked in groups for major projects before as the leader, and found it to be a good experience so I wasn't upset about this. Working within a group is never an easy task, so I wanted to make sure that as group leader I allowed our group to finish the project and deliver what we set out to do.
4. Presentation of work - After the first milestone submission, I realised that my presentation of my work was severely lacking, so I wanted to then put a lot more effort into presenting the work I had been doing. This also translated over into the group wiki, which I then started making sure all group members were looking after and uploading to.

My contributions
Kinect Interactivity:
I have only ever used a Kinect in a developmental sense briefly once, and that was through Grasshopper and Rhino. Using C++ to alter the code for the Kinect was completely new to me. I had a very basic knowledge of programming, so I could follow Stephen Davey's explanation to Matt and myself of how to set up a new CryEngine Flowgraph node and how to write the code to run it. This was the starting point for our development. I had to sit back and let Matt work through the start of developing our gesture detection node, but once I had seen him do some work on it, I was able to take over and contribute my own parts.
Our final node looks like this:

This node builds on one Stephen Davey had already written: the top section gathering the walk speed, turn amount, jump, leaning and pointing mechanics. We then added the parts that give us the output information we needed. The positional elements are the ones with pink boxes for outputs. These give us the vector coordinates for each part of the user's body and allowed us to create a tracking skeleton. The blue boxes are booleans, showing whether the Kinect is tracking someone, and whether they have fallen over. The white output second from the bottom is our gesture detection, which counts a value within the C++ code and outputs it as a float.

The first step we took in developing our system was making sure our tracking was working properly. This video demonstrates the first joint we put in, the hip joint. It tracks the user's hip joint, and showed us that the orientation of the axes was different for the Kinect than for CryEngine, as Matt's relative distance from the Kinect (Z axis) changed the ball's height (Y axis). This was easy to swap around once we realised.
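The axis fix boils down to swapping two components when copying Kinect joint data into the engine. A minimal sketch of that conversion, assuming a simple vector struct and the axis conventions described above (the names and exact mapping are illustrative, not the actual SDK's):

```cpp
// Simple 3D vector standing in for the engine's vector type.
struct Vec3 { float x, y, z; };

// The Kinect reports depth (distance from the sensor) on one axis
// while the engine treats that axis as height, so the two components
// are swapped when handing joint positions to the game world.
Vec3 kinectToEngine(const Vec3& k) {
    Vec3 e;
    e.x = k.x;  // left/right is shared between the two systems
    e.y = k.z;  // Kinect depth becomes the engine's depth axis
    e.z = k.y;  // Kinect height becomes the engine's up axis
    return e;
}
```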
Once we had this first positional data working, we got to work putting the rest of the person in.
We decided to use balls to represent the parts of the body because we couldn't get an actual person in the game to move.
Throughout this part of the development, Stephen suggested that I record some of our work using ChronoLapse, a program that takes screenshots of your computer screen every X seconds and then stitches them into a video. Here is the first one I took. It shows me doing some work in C++, then compiling. Most of our time in this part of the project was spent compiling.

We had two focuses for our interaction: gesture-based systems and position-based ones. For our position-based systems, we started by altering something based on the relative position of the user's hand and the object itself. This video demonstrates this by resizing a ball based on how close the hand is to it.
This system formed the basis of our moving bench and moving toilet systems.
The next test we ran was the moving toilet. I set this up using a sideways door to simulate a bench. During this stage of the development I looked into the ergonomics of seats. Regular seats have a specific ratio of the person's height to seat height to optimise comfort and avoid putting strain on the person. This translates into a specific bending angle of the knee, which for a regular seat is slightly over 90 degrees. For a toilet this angle is reduced to just under 90 degrees, optimally around 80. This angle is measured when the user is sitting comfortably with their feet planted naturally on the floor in front of them. The system demonstrated below takes the leg height of the person and adjusts the seat to match this ratio.
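The seat rule above can be sketched as a small function: measure the user's knee height from the tracked foot and knee joints, then scale the seat to a fixed ratio of it. The ratio value and joint names here are illustrative, not the ones from our actual flowgraph:

```cpp
struct Vec3 { float x, y, z; };

// Hypothetical sketch of the seat-height rule: the seat is set to a
// fixed ratio of the user's knee height, measured from the tracked
// foot and knee joints (Z is treated as up here).
float seatHeightFor(const Vec3& foot, const Vec3& knee, float ratio) {
    float kneeHeight = knee.z - foot.z;      // vertical leg height
    if (kneeHeight < 0.0f) kneeHeight = 0.0f; // guard against tracking noise
    return kneeHeight * ratio;               // e.g. lower ratio for a toilet
}
```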

The area of our interaction that I was most interested in was the temperature control. We used a gesture for this one: if your right forearm is horizontal, your left hand then alters a value by being above or below your waist. The initial test took the value and adjusted the scale of an orb, but this was later changed to a block that scaled in height.
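The logic of that gesture can be sketched in a few lines: check that the right elbow and wrist sit at a similar height (forearm roughly horizontal), then nudge the value up or down depending on whether the left hand is above the waist. The thresholds and step size are illustrative assumptions, not our actual tuning:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// The gesture is "active" while the right forearm is roughly level,
// i.e. elbow and wrist at a similar height (Z up).
bool forearmHorizontal(const Vec3& elbow, const Vec3& wrist,
                       float tolerance = 0.1f) {
    return std::fabs(elbow.z - wrist.z) < tolerance;
}

// While active, the left hand raises or lowers the value by being
// above or below the waist; otherwise the value is left alone.
float adjustTemperature(float current, const Vec3& rElbow, const Vec3& rWrist,
                        const Vec3& lHand, const Vec3& waist,
                        float step = 0.5f) {
    if (!forearmHorizontal(rElbow, rWrist))
        return current;  // gesture not held: keep last value
    return (lHand.z > waist.z) ? current + step : current - step;
}
```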

This was the extent of our basic work before we began to integrate it with the work of the bathroom design side of the group. They gave us a fully developed level into which we brought our flowgraph and linked it up to everything we needed.
The video below is a timelapse of us setting up some of the interactivity in the finished bathroom.
This next video is one I made as a backup for our presentation. It demonstrates all of the interactive elements of the bathroom, and annotates how they work.

Below are images of our final flowgraph. I'll try to explain what each section does, although it may be difficult to tell just from the pictures.
Full flowgraph
 This section takes all of the output coordinates from the Kinect and creates positional vectors from them to input into the balls that represent the player.
creates positional vectors
 This one assigns those vectors to the balls. I laid it out like a person so we could easily determine which entity related to which body part. This actually became very useful later on when we were using these entity names in other parts of the flowgraph.
assigns the entity positions
These next few all relate to the light controls, and their UI messages.

The math behind the light brightness




 This section shows the area of the flowgraph responsible for the gesture controls for the shower temperature and the UI messages.
Shower temp controls and UI.

As well as working on the Arduino and Kinect/Crysis work, I was in charge of creating the documentation for the programming side of things. This involved the screen captures using ChronoLapse, as well as creating the demonstration videos of our work in progress. I filmed Matt interacting with the Kinect on my phone, while using a screen capture program to record CryEngine. I would then edit these two videos together and upload them to my YouTube channel so they were available for the group to use. I also took a number of screenshots of the flowgraph and C++ code to share around.
This all culminated in the final demonstration video that I created in case our system failed on the day of presentation (which it did, but that was a CryDev problem, not ours, so it was postponed). This video demonstrates all of our interactive elements as well as giving an annotated explanation of them all.




 Individual Development
Interaction development:
For this project the interaction was based both in C++ and in the CryEngine Flowgraph, so it was essential to learn to use both properly for the logic operations we needed to perform. I learned a lot about the C++ language and its implementation throughout the course. The main thing I learned was how to properly structure and set up a piece of code in order for it to work. This was especially important for the Kinect project, because unless every part of a new node is structured properly it won't work.
In addition, trying to set up the gestures showed me some things about coding I had never thought about. For example, we used a counter within the code to measure how long something was true for. We used the Kinect's refresh frames to measure it, and then a series of "if" statements to determine what to do afterwards. This ended up as a large loop that would reset the counter if the gesture was broken. This sort of logic is something I would never have learned if not for attempting this kind of gesture detection.
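The counter-and-reset loop described above can be sketched as a small struct: each refresh frame it checks the pose, climbs while the pose holds, resets the moment it breaks, and fires once the pose has been held long enough. The frame threshold is illustrative:

```cpp
// Frame-counter gesture detection: the pose must be held for a set
// number of consecutive refresh frames before the gesture fires.
struct GestureCounter {
    int frames = 0;   // consecutive frames the pose has been held
    int required;     // frames needed before the gesture fires

    explicit GestureCounter(int requiredFrames) : required(requiredFrames) {}

    // Call once per Kinect refresh frame; returns true when the
    // gesture fires.
    bool update(bool poseHeld) {
        if (poseHeld)
            ++frames;
        else
            frames = 0;  // pose broken: reset the counter
        return frames >= required;
    }
};
```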

In terms of the CryEngine Flowgraph, I already had a fair amount of experience using it, but never in conjunction with another system like the Kinect. I had never used actual mathematics within the flowgraph, which in this case was essential because we needed to constantly calculate new variables. For example, to find the distance between two tracked points you need Pythagoras' theorem, which ends up a mess of nodes when implemented in the flowgraph.
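That distance calculation is just Pythagoras' theorem extended to three dimensions: one short function in C++, but a sprawl of Math nodes in the flowgraph.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Distance between two tracked points: the 3D form of
// Pythagoras' theorem, sqrt(dx^2 + dy^2 + dz^2).
float distance(const Vec3& a, const Vec3& b) {
    float dx = b.x - a.x;
    float dy = b.y - a.y;
    float dz = b.z - a.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}
```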

Working with Matt on the coding side of things was brilliant because of his huge amount of background knowledge in programming. He helped me learn a lot through this project, and I definitely wouldn't have been able to do anywhere near as much without him. I found it interesting when we got to the Flowgraph side of things, because he kept thinking in terms of code, which doesn't translate well to the flowgraph even when it makes logical sense: you can't have things like loops and conditions in the flowgraph the way you can in a piece of code. I'm very good with logic, so on a number of occasions I was able to think of approaches Matt wouldn't have, purely because I wasn't thinking about them in a programming sense but in a more general way. A good example was using the gate nodes to properly trigger booleans.

In this picture, the gate is the one flowing into the large collection of nodes at the bottom. This turned out to be a good work around to some problems we were having with booleans triggering.

Intellectual Property:
I chose our group's presentation topic, based on the idea that our project was taking a large amount of other people's intellectual property and utilising it for our own gains. I thought it would be the most fitting subject for our project, and it would be interesting to look into how to properly go about using other people's property, as well as protecting our own. My area of research for the presentation was the different types of IP our group was using that belonged to other people, and how to properly go about using them. This had me look at the licensing agreements for the different IP we were using, and gave me a good idea of how to use them for non-financial gain as well as commercially. This side of the project also became very relevant to me, as I had to investigate the same area for another course weeks later, and this gave me some very specific information to take into that project.

Collaboration:
 Our course seems to favour group work, so we were no strangers to collaboration, especially in large groups. For this project, we split our group into two smaller groups, myself and Matt working on the programming and interactivity side, while Laura, Dan and Siyan worked on the modelling and visualisation side.
I've worked with Matt on a number of projects before, and knew from the outset that we would work well together. Our main hindrance was that his computer wouldn't run CryEngine because of a Windows 8 limitation in the program, and that working on one piece of code across multiple computers would prove impossible. This saw us coming into uni multiple days a week purely to work together on this project. It turned out to be very beneficial to both of us, because it let us bounce ideas off each other, and when one of us was struggling with a problem the other could usually help solve it. Having the two of us also meant one person could test the Kinect system while the other was editing it, which saved a lot of time.

With the group split in two, we had a very disjointed start to the project, because the first part of each group's work was largely unrelated to the other's. During this time we had very little contact with the other group, which, as the group leader, I should have rectified much sooner than I did. However, once we reached the stage of integrating our Kinect interaction with their level, we became a much more collaborative team.
The design side of the group put a large amount of detail on the wiki, which was amazingly useful during the later stages of the project, especially in setting up our final presentation. They had all the plans for the bathrooms, which allowed us to map out where to place everything, as well as decide where the Kinect and the CryEngine camera view should be placed.

What I would do differently
I'll split this section into two. Firstly, what I would do differently as a team leader, and secondly what I would do differently as a member of the team.

As leader, I feel I should have taken a much bigger role in organising the team at the beginning of the project. Doing so could have saved us a lot of time and allowed us to be much more efficient. I would have set up a very clear list of deliverables that we expected to have accomplished by the end of the project. As it was, our deliverables changed dramatically over the course of the project, possibly because they weren't locked down at the start.

I would also have made a much bigger effort on team cohesion. The way we split the group in two worked well enough, but it left us not knowing what the other half was doing most of the time. If done again, I would spend a lot more time making sure the entire group was interacting the whole time, possibly by splitting the roles differently. For example, we could all have been involved in small parts of each side on a weekly basis. While this might have slowed things down because we would be all over the place, with a strict structure it could have made us work better, knowing what everyone was up to at all times.

As an individual within the group, there are a number of things I would have done differently as well. Firstly, I would have been much more diligent about documenting my progress in the beginning weeks of the project. The first milestone submission was a good kick in the right direction and forced me to become much more aware of recording progress, both on video and in writing.
Secondly, I would have, if possible, done more research and individual work on the C++ side of things. As it was, I learned most of what I did through watching or learning from Matt. It would have been nice to have a chance to delve a little deeper into it all.


Summary
At the end of this project, we have created a very effective and easy-to-learn system for controlling different aspects of a virtual bathroom. While this project only serves as a proof of concept, I feel it definitely shows that this sort of system could work. It is all well documented on the wiki, and through the members' personal blogs, which act as a resource for anyone interested in looking into these sorts of systems. I feel this is a project we would be able to take to a much higher level were we funded by Caroma to actually produce these systems for real-life purposes. I can see systems like these being in everybody's homes in 10 or so years, and it would be fantastic to be part of the reason they are there.






Wednesday 29 May 2013

Week 12

We spent this week getting everything ready for next week's presentations. We did this by finalising all of our interactive elements, as well as preparing for our group presentation.

Lights:
On Russell's suggestion we implemented another gesture-based interactive element. The best-suited candidate was the lights, so we set it up so that the lights are adjusted by holding your hands at shoulder height and moving them, kept equidistant from your head. Once your hands are no longer at head height, the system locks the variable to the last set value. We also set the same system up with the mirror lights: walking up to the mirror switches control over from the main lights to the mirror lights, and the system works the same way from then on.
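One way to sketch that light gesture: while both hands sit at roughly shoulder height, the spread between the hands drives the brightness, and dropping the hands locks in the last value. The thresholds and mapping range here are illustrative assumptions, not our actual flowgraph values:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Light gesture sketch: active only while both hands are near
// shoulder height; the hand spread sets brightness, and dropping
// the hands holds the last set value.
struct LightControl {
    float brightness = 1.0f;  // last locked brightness (0..1)

    void update(const Vec3& lHand, const Vec3& rHand,
                const Vec3& shoulder, float tolerance = 0.15f) {
        bool active = std::fabs(lHand.z - shoulder.z) < tolerance &&
                      std::fabs(rHand.z - shoulder.z) < tolerance;
        if (!active)
            return;  // hands dropped: keep the last value
        float spread = std::fabs(rHand.x - lHand.x);
        if (spread > 2.0f) spread = 2.0f;  // assumed max reach of 2 m
        brightness = spread / 2.0f;        // map spread onto 0..1
    }
};
```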




Russell also suggested that we add more direct feedback through the UI about what is happening, and give better instructions on how to use everything. I spent most of the studio class setting up these UI systems, and they all work nicely now, giving proper feedback. I had to make them appear in ordered parts of the screen so they wouldn't overlap each other if more than one was triggered at a time. For next week's presentation we will be coming in the day before to set up an area within the classroom to use for our demonstration. We plan to set up a fake bathroom using tables as walls, and mask out the different sections of the room with tape, representing things like the bench and shower. This will be the main part of our group's presentation, as we felt that giving a live demo of our system would be more beneficial than spending the entire time talking. In case the system doesn't work during the presentation, I also made a video demonstrating all of the interactive elements as a backup.



As Matt is always the person in the videos we make (because I don't like being filmed), I've been the one filming them and recording the screen. I then edit these myself using Sony Vegas and overlay the video of Matt onto the screen capture. For these last few videos I also added annotations to explain what is happening.

Wednesday 22 May 2013

Week 11

This was the week in which we combined the separate work our group members had been doing into one cohesive project. The team working on the modelling and visualisation side gave Matt and myself a copy of the level, to which we began to add our Kinect interactivity. The initial step was choosing the right bathroom to use, as they had modelled three sizes: small, medium and large. I decided the best one to use would be the largest, as it would be much easier to use as an example for the real-world demonstration in our final presentation, and the group agreed.

Matt and I spent most of the week's tutorial, as well as most of the next day, working to implement the interactive elements from the Kinect.

Temperature:
At the moment, the temperature control is the only one we have set up with an actual gesture. As in our earlier demonstration using a ball as a substitute, the user holds their right arm horizontal to activate the control, then by holding their left hand above or below the waist they can control the variable. I set up a red rectangular prism in the corner of the shower to represent this variable, as I thought it would be the best way to show the changes. Our other option was to increase a steam particle effect, but I felt the prism would give a much clearer indication of the effect of the gesture control. This was the easiest of the interactive elements to connect up, because we already had it running earlier and only needed to change what the variable controlled.



Height control:
The height controls for the sink and toilet turned out to be some of the hardest things we worked on in the entire project. Our earlier creation along these lines, the "bench" (a sideways door) that adjusted its height according to the user's knee level, was a simplified version of what we set out to create here. The early test had no constraints, but we wanted to put a lot of constraints on these objects. One major concern was a maximum height, which was easy enough to limit in the flowgraph. The other was only activating the controls when the user wants them. For this, we decided to use a proximity test: if the user is within a certain distance of the sink or toilet, the flowgraph activates the part of the graph that controls them; otherwise they remain at their last set height. We set this up after Russell had a look at the work and suggested that constant movement would wear out the parts much quicker in a real-life version, so stopping it when it's not needed would save this wear and tear. Additionally, it stops them moving when, for example, you bend down to grab something, and stops the sink moving when you sit on the toilet.
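The proximity gate and maximum-height constraint described above can be sketched together. The distances and limits here are illustrative assumptions:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// A height-adjustable fixture (sink or toilet) that only updates
// while the user stands within range of it; outside that radius it
// holds its last set height, saving wear in a real installation.
struct AdjustableFixture {
    Vec3 position;    // where the fixture sits in the room
    float height;     // last set height
    float maxHeight;  // hard upper limit on travel
    float range;      // activation radius around the fixture

    void update(const Vec3& user, float desiredHeight) {
        float dx = user.x - position.x;
        float dy = user.y - position.y;
        if (std::sqrt(dx * dx + dy * dy) > range)
            return;  // user out of range: hold the last height
        height = (desiredHeight > maxHeight) ? maxHeight : desiredHeight;
    }
};
```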



Light controls:
At the moment we have the lights set up so that they activate when the Kinect recognises someone in the room. This seemed like it would be easy, but turned out to be difficult because of the way the Kinect works. Because of its limited range, it is difficult for it to see someone on the outskirts of the example room from where we have it situated. Also, when the Kinect loses sight of someone who has, for example, left the room, it leaves the last set of data it registered in the system, so the orbs we use to represent the player stay where they last were. This means the lights will mostly stay on, even if you leave the room and the Kinect's range.
We also set up controls for the mirror lights. They are set up similarly to the toilet, and will turn on when the player moves near them.

Emergency fall detection:
One of the earliest ideas we had was a test to determine if the user had fallen over and, if so, call an emergency or family line. We found this to be a really important test, as the entire project is aimed at the elderly. One problem we kept running into was that when the user lies down horizontally, the Kinect loses its ability to track them properly. The coordinates sent to the orbs jump very randomly, and this can sometimes turn off the trigger. We went through about three different tests to get to one that works reliably. It tests whether the waist height is within a certain limit of the head, over a period of time. This counteracts things like bending down to grab something, which would not be within both the time and position limits.
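The fall test described above combines a position check with a time window: the head must stay within a small vertical band of the waist for a sustained number of frames before the alarm fires, so a brief bend doesn't trigger it. The band and frame count here are illustrative:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Fall detection sketch: the alarm fires only after the head has
// stayed near waist height for enough consecutive frames.
struct FallDetector {
    int framesDown = 0;  // consecutive frames the user has been "down"
    int requiredFrames;  // how long they must stay down to trigger
    float heightBand;    // max head-to-waist vertical gap when fallen

    FallDetector(int frames, float band)
        : requiredFrames(frames), heightBand(band) {}

    // Call once per refresh frame; returns true when a fall is detected.
    bool update(const Vec3& head, const Vec3& waist) {
        if (std::fabs(head.z - waist.z) < heightBand)
            ++framesDown;
        else
            framesDown = 0;  // user upright again: reset the timer
        return framesDown >= requiredFrames;
    }
};
```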



This week I also created a number of videos. I very much dislike being filmed, so I had Matt demonstrate our work while I filmed it. I then edited it together, using both the video I took of him demonstrating and a screen capture of the interactivity working in CryEngine. I synced and edited these, then uploaded them to YouTube.

Tuesday 21 May 2013

Remuneration presentation review - Vivid group

The Vivid group were the last to give their presentation, which was on the topic of remuneration. I have very mixed feedback for this group, because half of the group presented well and half not so well. The ones that presented well spoke very clearly and were as engaging as you can get for a presentation on remuneration, although at times they went into too much detail, taking up far too much time. The other half of the group gave off the feeling that they were very unsure of what they were talking about, and provided far too little detail. The half that spoke well did so by presenting their notes to the audience, rather than just reading them. They were able to elaborate and properly explain things, rather than just reading out a page full of big words to impress us like some other presentations have done. The other half spent most of the time reading their notes off the page and didn't elaborate on anything. The written part of the presentation definitely had thought put into its structure, as each section flowed fairly well into the next. Its only flaw was that it sometimes had far too much on the screen at once, so it became a little difficult to take it all in. There was also a lot of jargon thrown around without being properly explained to people who hadn't researched the topic. The examples they provided were very well explained, especially the case of using Pat's real-world pay slips to demonstrate what they were talking about. One thing I would have liked to see was their project reflected in the presentation. Theirs is the only real-world project with a budget and an end goal, so it would have been nice to see some of that come out in the presentation for relevance.

Tuesday 14 May 2013

Group Presentations - Conflict

DCLD: DCLD were the first of the two groups to give their conflict presentation. Overall, the presentation was full of information, but its main downside was that it had far too much text on the screen. This was common across all of the group members' sections. The written part of the presentation had lots of detail, but in some cases this caused a problem, as they were trying to get too much information across at some points. They also had an issue with using too many lists. On a majority of slides, all the information would be listed in full detail, and they would simply run through the dot points one by one. This gave the presentation a very disjointed feel, as they bumped from one topic to the next without any kind of transition. The oral side of the presentation suffered due to the nature of their slides. They had all the information they wanted to communicate written on the slides, and then simply read off them. A lot of the time I found myself reading the entire slide quickly, then losing attention to what was being said because I had just read it. A more effective way of presenting would have been to have the dot points contain only snippets of the information, or headings, which would then be elaborated on in the spoken component. There was also a sense that the group wasn't particularly organised for their presentation. I got this from the way they handed over between speakers: they would throw the presentation over to someone else, who seemed not to know where to pick up. This made each section seem disconnected from the rest. The images they used were fairly decent, although sometimes a bit small or full of words. The flowgraphs seemed very relevant, although it was hard to make out the details due to their size. As far as I could tell, all the images were referenced, although again they were too small to read. These problems would be negated if I had a copy of the presentation, but from the audience as it is delivered, you cannot make out the details.

Kinecting the boxes: Kinecting the boxes presented second on the day. The written side of their presentation was well done. Like the last group they used plenty of lists, but not to the same extent. There was definitely more of a flow between the topics, so it seems they thought through the flow of their presentation and how one person would hand over to the next and link their subject matter. They used plenty of examples to demonstrate what they were talking about, although these were somewhat vague and unrelated to their group work. While this isn't wrong, it would have been nice to hear about some conflict resolution that had happened within their own group. Their oral presentation was significantly better than the DCLD group's. They read off notes rather than the slides, which meant the audience had to listen to the presentation to gain the knowledge. They did somewhat fail to engage the audience, though, as they were merely reading rather than talking to us. Like the previous group, the images they used contained far too much text, and some, like the flowgraph, were too small to read and took far too long to explain. I found myself losing interest during these long explanations. They explained their topic well, but it is hard to tell whether they understood what they were telling us or were simply reading it off the paper in front of them.

Monday 13 May 2013

Kinect interactivity update

This week we've been working on creating more interactivity with the Kinect into CryEngine. We've done a lot of coding in C++ to look at skeletal positions and output these into the game engine. Using the flowgraph we can then use these vectors to interact with the game world.




 This video shows the initial gesture control we've set up. By holding your right arm horizontal you activate the flowgraph, which then determines whether your left arm is above or below your hip. By raising or lowering it you can alter a value that will later be used to change water temperatures in the shower or bath. It could also be used to dim lights or dynamically change other variables in the bathroom.





This demonstration shows a proximity test. There is a radius around the ball; when your hand is inside it, the ball changes size depending on how close your hand is to it. This could be used to turn lights on and off by moving your hand near them, or to turn taps on and off.


This test calculates the distance between your foot and knee, then scales the height of a seat or bench depending on that variable. This is used to alter the height of seats and benches to better suit the ergonomics of the user. I investigated the proper ergonomics of seats and found that comfortable seating leaves the user with the knee bent at just over a 90-degree angle. This calculation should ensure that is the case no matter who sits on the seat.

Monday 6 May 2013

Milestone followup

As an extension to the milestone post, here are examples of the Arduino kit working, as well as the Kinect interacting with CryEngine 3.

Simple Motor:
This is a simple motor being turned on and spinning at a set speed: a very basic form of how our Kinect rig would spin and move. By altering the variables in the code for how fast it spins, we can control the amount of rotation, and by linking the movement to a different input variable, we can determine when to move it.
A copy of the code to create this can be found here.

 Light turning on and off:
This example shows a light being turned on and off by buttons. It demonstrates the ability to toggle one aspect of the Arduino with an input other than just power. In this case it's a switch toggling it on then off; for our project it could be input from an aspect of the Kinect sensor telling the Arduino to act.
A copy of the code to create this can be found here. It comes from a basic example that keeps the LED on, modified to allow the button to turn it on and off.

LED dimming:
This one is essentially the same as the last, although the loop part of the code is altered to constantly increase or decrease the brightness of the LED depending on whether the on or off button is pushed.

Light array:
The light array code turns all of the lights on, applying an increasing delay as it moves along the array, making it look like the light is moving across them all. The notion of a delay in code would be useful to our project both in moving the Kinect and in interacting with the Crysis environment through it, as some interactions benefit from a slight delay, such as positioning different things in the bathroom to suit the occupant.
The code can be found here.

 Potentiometer:
A potentiometer is a resistor that, in this case, connects to 5 volts across its three pins, and reads a value between 0 and 5 volts depending on the angle it is turned to. In this setup, the value is stored and used to determine the speed at which the light turns on and off. The application in our project is that it demonstrates how to gather an input, store its data, and use it to change an output.
The code can be found here.
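The potentiometer behaviour above boils down to mapping a raw analog reading onto an output range, here the blink delay in milliseconds. This is plain C++ mirroring the linear-mapping logic an Arduino sketch would use; the 0 to 1023 input range matches a 10-bit analog read, and the delay bounds are illustrative assumptions:

```cpp
// Linearly map a value from one range onto another, the way an
// Arduino sketch remaps an analog reading onto an output range.
long mapReading(long value, long inLow, long inHigh,
                long outLow, long outHigh) {
    return (value - inLow) * (outHigh - outLow) / (inHigh - inLow) + outLow;
}

// e.g. convert a raw 0-1023 potentiometer reading into a blink
// delay between 50 and 1000 ms (assumed bounds).
long blinkDelayMs(long reading) {
    return mapReading(reading, 0, 1023, 50, 1000);
}
```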