Development Log

11/23 - 12/5  P3 Gold

In the final week of development, our team completed its last round of iteration and polish to deliver the finished product.


UI Feedback

In an effort to increase real-time feedback to the user during their speech, this week we implemented a volume input UI and iterated on the head-level tracking UI. There are now volume meters throughout the map, most notably on either side of the podium, that the user can reference to maintain a good vocal projection score: by keeping their vocal volume in the green zone, the user earns a high score. Additionally, the head tracking UI was changed from a line trace to a distant crosshair. This was intended to reduce the distraction of a line emanating from the user's head and preserve a sufficient level of immersion.
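
As a rough illustration of the green-zone logic, here is a minimal standalone C++ sketch. The normalized input level and the 0.3/0.8 zone boundaries are placeholders, not the values tuned in the app.

```cpp
#include <iostream>

enum class VolumeZone { TooQuiet, Green, TooLoud };

// Bucket a normalized 0..1 microphone level into feedback zones.
// Thresholds are illustrative placeholders.
VolumeZone ClassifyVolume(float level) {
    if (level < 0.3f) return VolumeZone::TooQuiet;
    if (level > 0.8f) return VolumeZone::TooLoud;
    return VolumeZone::Green;
}

int main() {
    const float samples[] = {0.1f, 0.4f, 0.5f, 0.9f, 0.6f};
    const float dt = 0.1f; // seconds between volume samples
    float greenSeconds = 0.0f;
    for (float level : samples)
        if (ClassifyVolume(level) == VolumeZone::Green)
            greenSeconds += dt; // credit time spent in the green zone
    std::cout << "Time in green zone: " << greenSeconds << "s\n"; // 0.3s
}
```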


Revamped Tutorial

This week, the tutorial was significantly updated to become more immersive and comprehensible. It now plays automatically until the application requires the user to complete an action. For example, throughout the tutorial, the player is asked to use notecards, trigger specific audience reactions, and perform dynamic gestures; once these actions are fulfilled, the tutorial continues. Throughout this process, the user is guided by a voiceover that paces the tutorial and clearly delivers important information.
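
Conceptually, the gating works like a step list that auto-plays narration and only advances past an action step once that action is reported. Below is a minimal standalone C++ sketch of that flow; the step contents and the NotifyAction hook are illustrative rather than our actual tutorial code.

```cpp
#include <iostream>
#include <string>
#include <utility>
#include <vector>

struct TutorialStep {
    std::string narration;
    std::string requiredAction; // empty means the step auto-advances
};

class Tutorial {
public:
    explicit Tutorial(std::vector<TutorialStep> steps) : steps_(std::move(steps)) {}

    void Begin() { PlayCurrent(); }

    // Called by gameplay systems when the user completes an action.
    void NotifyAction(const std::string& action) {
        if (index_ < steps_.size() && steps_[index_].requiredAction == action) {
            ++index_;
            PlayCurrent();
        }
    }

private:
    // Play narration, auto-advancing until a step demands an action.
    void PlayCurrent() {
        while (index_ < steps_.size()) {
            std::cout << "VO: " << steps_[index_].narration << "\n";
            if (!steps_[index_].requiredAction.empty()) break;
            ++index_;
        }
    }

    std::vector<TutorialStep> steps_;
    size_t index_ = 0;
};

int main() {
    Tutorial t({{"Welcome to SpeakVR.", ""},
                {"Pick up your notecards.", "notecards"},
                {"Perform a dynamic gesture.", "gesture"}});
    t.Begin();
    t.NotifyAction("notecards");
    t.NotifyAction("gesture");
}
```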


More Audience Behavior Iteration

Our audience manager was also enhanced to maintain a separate list of audience members for each behavior, which allows multiple behaviors to be active in the audience at once. Additionally, every audience behavior is now properly linked to a poor habit the user may be displaying: looking at the audience on one side of the auditorium for too long, keeping a low volume for an extended period, or pausing for more than three seconds. In these cases the audience will boo, whisper, or get bored, respectively.
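
The trigger logic can be pictured as in the standalone C++ sketch below. The three-second pause comes straight from the behavior described above; the other two thresholds are placeholder values.

```cpp
#include <iostream>
#include <vector>

enum class Reaction { Boo, Whisper, Bored };

// Returns every reaction whose trigger currently fires; since the manager
// keeps a separate member list per behavior, several can run at once.
std::vector<Reaction> ActiveReactions(float secondsOnOneSide,
                                      float secondsAtLowVolume,
                                      float secondsSilent) {
    std::vector<Reaction> active;
    if (secondsOnOneSide > 8.0f)    active.push_back(Reaction::Boo);     // staring at one side
    if (secondsAtLowVolume > 10.0f) active.push_back(Reaction::Whisper); // sustained low volume
    if (secondsSilent > 3.0f)       active.push_back(Reaction::Bored);   // pause over three seconds
    return active;
}

int main() {
    // A speaker who has been quiet for 12s and silent for the last 4s:
    std::cout << ActiveReactions(0.0f, 12.0f, 4.0f).size() << " behaviors active\n"; // 2
}
```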


Performance Feedback Rubric

Instead of directly presenting the raw analytic measurements, we decided to score each user's speech on four criteria: speech flow, vocal projection, dynamic gestures, and audience interaction, each graded on an A-to-F scale. After receiving their scores, the user can interact with each category to open a suggestion window with advice on how to either improve their score or reinforce the good habits formed during the speech.
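
The bucketing itself is straightforward, as in the minimal C++ sketch below; the grade boundaries shown are illustrative, not our actual curve.

```cpp
#include <iostream>

// Map a 0..100 category score to a letter grade. Boundaries are placeholders.
char LetterGrade(float score) {
    if (score >= 90.0f) return 'A';
    if (score >= 80.0f) return 'B';
    if (score >= 70.0f) return 'C';
    if (score >= 60.0f) return 'D';
    return 'F';
}

int main() {
    std::cout << "Vocal projection: " << LetterGrade(84.0f) << "\n"; // B
}
```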


Pricing Iteration

Our pricing plan was also updated to its final iteration. As opposed to the two packages we had before, we decided to expand to four options that our target audience can choose from based on their class or clinic size.


11/15 - 11/22  P3 Alpha

This week our team focused on polish and on heightening the player experience. We built on preexisting features and added new depth to the application.


Refining Old Features

The audience AI and head-level tracking systems were the main features our team refined this week. The audience AI now has new events that trigger various audience behaviors. These events play throughout the user's speech to give the player feedback on how their performance is currently scoring: when the user is doing poorly, the audience will appear bored and disinterested; when the user is doing well, the audience will appear attentive and may even applaud. We also created different materials for audience members' clothes and hair colors in order to reduce homogeneity and develop more visual interest.

Head-level tracking was updated to include targets around audience members' heads. Additionally, depending on the user's behavior, the line trace now offers appropriate feedback. For example, if the user looks too long at one side of the auditorium, the line trace turns yellow to indicate that they need to look at the other side of the room.
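
The yellow-trace warning reduces to a pair of timers, one per side of the room, as in the standalone C++ sketch below; the five-second imbalance threshold is a placeholder, not the tuned value.

```cpp
#include <iostream>
#include <string>

struct GazeBalance {
    float leftSeconds = 0.0f;
    float rightSeconds = 0.0f;

    // Accumulate gaze time on whichever side the user is looking at.
    void Tick(bool lookingLeft, float dt) {
        (lookingLeft ? leftSeconds : rightSeconds) += dt;
    }

    // Yellow warns the user to look at the neglected side.
    std::string TraceColor() const {
        const float imbalance = leftSeconds - rightSeconds;
        return (imbalance > 5.0f || imbalance < -5.0f) ? "yellow" : "white";
    }
};

int main() {
    GazeBalance gaze;
    for (int i = 0; i < 12; ++i) gaze.Tick(/*lookingLeft=*/true, 0.5f); // 6s on the left
    std::cout << gaze.TraceColor() << "\n"; // yellow
}
```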


New Areas

During playtests, we noticed that starting the user in the auditorium with no direction was overwhelming and confusing. Therefore, this week we decided to compartmentalize the experience and ease the user into the application. The user now spawns in a side antechamber where they can choose a preloaded or customizable speech, enter the tutorial, or travel to the auditorium to begin the simulation. There is also a new tutorial room where the player is introduced to all features of the application, namely the vocal recognition software, eye tracking, audience, and notecards.


Customizable Speeches

An exciting new feature of our application is customizable notecards. Before starting the application, the user can upload a .txt file containing their own personalized speech to SpeakVR's "Speeches" folder. Then, if the player selects the "custom speech" option in the speech selection menu, the text from their .txt file will appear on the notecards. We believe this feature enhances our application's novelty and, in combination with the audience AI, proves the utility of a VR platform over practicing in front of a mirror.
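
A standalone C++ sketch of the loading idea follows; the file path and the forty-words-per-card split are stand-ins for however the app actually paginates the speech.

```cpp
#include <fstream>
#include <iostream>
#include <string>
#include <vector>

// Read a plain-text speech and split it into notecard-sized chunks.
std::vector<std::string> LoadNotecards(const std::string& path,
                                       size_t wordsPerCard = 40) {
    std::ifstream file(path);
    std::vector<std::string> cards;
    std::string word, card;
    size_t count = 0;
    while (file >> word) {
        card += (count ? " " : "") + word;
        if (++count == wordsPerCard) { // card is full; start a new one
            cards.push_back(card);
            card.clear();
            count = 0;
        }
    }
    if (!card.empty()) cards.push_back(card);
    return cards;
}

int main() {
    for (const auto& card : LoadNotecards("Speeches/custom.txt"))
        std::cout << "--- notecard ---\n" << card << "\n";
}
```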


Stakeholders and Pricing

This week we finalized our research on other organizations with experience and interest in the field of public speaking. We also scheduled an appointment with public speaking professionals Professor Jamie Moshin and Professor Henry Seeger, both of the University of Michigan. This upcoming week we plan to meet with both of them to evaluate our pre-gold application and improve its potential social impact. We also reevaluated the pricing of the application: after some discussion, we updated our price ranges and package descriptions to be more reasonable given our target audience.


Going Forward

This next week will be all about polish. Now that our main features are all in the application, we will spend the final week of development cleaning up any remaining bugs and refining the presentation of our software. Namely, we will redesign the various UI elements for a more aesthetically pleasing look and enhance the guidance systems. Within the tutorial, we hope to add narration to free the user from having to read large blocks of text. We also plan to add more visual feedback during the speech to lead the player toward a higher score.

11/8 - 11/14  P3 Milestone 2

This week our team worked primarily on iterating on our prototype and further developing our app's key features.


Eye Level Tracking

One of the metrics we decided to use to grade the user's speech performance is the amount of time their gaze stays at the appropriate eye level. Most public speaking and performance professionals recommend keeping your eyes above the tops of the back row's heads while performing on stage. To measure whether the user is adhering to this principle, we added a line trace to the headset and targets at the back of the auditorium. This line trace accurately represents the angle of the user's head and therefore their eye level. We then added logic to the VRpawn that tracks the amount of time the user spends with their head angled at the targets during a speech and relays it to the analytics manager.
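
In spirit, the measurement reduces to accumulating time while the head's pitch stays inside an eye-level band, as in the standalone C++ sketch below. The -2 to 8 degree band is an assumption for illustration; the real version traces against the physical targets at the back of the auditorium.

```cpp
#include <iostream>

// True when the headset pitch (degrees above horizontal) clears the
// back row's heads. The band is an illustrative assumption.
bool AtEyeLevel(float pitchDegrees) {
    return pitchDegrees >= -2.0f && pitchDegrees <= 8.0f;
}

int main() {
    float eyeLevelSeconds = 0.0f;
    const float dt = 0.5f; // seconds between head samples
    const float pitchSamples[] = {-10.0f, 0.0f, 3.0f, 12.0f, 5.0f};
    for (float pitch : pitchSamples)
        if (AtEyeLevel(pitch)) eyeLevelSeconds += dt;
    std::cout << "Time at eye level: " << eyeLevelSeconds << "s\n"; // 1.5s
}
```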


Analytics

To actually deliver performance feedback to users, we added detailed analytics that are shown after the user completes their speech. To do so, we implemented a start and stop button for the user to toggle when they are ready to begin and end. After deliberation, our team decided that giving the user direct control over when the analytics start and end would be the most intuitive and comfortable way to use the application; this way, the user can quickly end and start over if need be. Currently, the analytics manager tracks the length of the speech and how many times the user's gaze meets the appropriate eye level. As we develop more features, we hope to record more data via the manager. Once a user concludes their speech by pressing the button, the analytics manager feeds its information to a UI that displays it until a new speech is started.
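
A minimal standalone C++ sketch of that start/stop bracket is below; the class and method names are invented for illustration and only mirror the metrics described above.

```cpp
#include <chrono>
#include <iostream>

class AnalyticsManager {
public:
    void StartSpeech() {
        start_ = std::chrono::steady_clock::now();
        eyeLevelHits_ = 0;
        running_ = true;
    }

    void RecordEyeLevelHit() { if (running_) ++eyeLevelHits_; }

    void StopSpeech() {
        if (!running_) return;
        running_ = false;
        const double seconds = std::chrono::duration<double>(
            std::chrono::steady_clock::now() - start_).count();
        // In the app, this would feed the results UI rather than stdout.
        std::cout << "Speech length: " << seconds << "s, eye-level hits: "
                  << eyeLevelHits_ << "\n";
    }

private:
    std::chrono::steady_clock::time_point start_;
    int eyeLevelHits_ = 0;
    bool running_ = false;
};

int main() {
    AnalyticsManager analytics;
    analytics.StartSpeech();  // user presses the start button
    analytics.RecordEyeLevelHit();
    analytics.StopSpeech();   // user presses the stop button
}
```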


Other Improvements

This week we also iterated on our speech feedback, speech selection, and audience feedback systems. Instead of having the user's voice repeated directly back to them, we implemented spatial audio feedback, which makes the user's voice sound as though it is coming from the auditorium speakers. We hope this will heighten player immersion by creating a more realistic experience. We also created a speech selection UI menu that appears when the player presses the start speech button. Currently the selection menu offers only four generic speeches the user can practice; in the future, we hope to let the user insert their own speech. This week we conducted some in-depth research on the feasibility of that feature given the VR platform. We have not yet found an intuitive solution, but we will continue researching it in the coming week. Finally, we created more behaviors for the audience AI to further build player immersion and filled the auditorium to create a more realistic crowd.
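
The effect of the spatialized playback can be approximated with simple distance attenuation, as in the standalone C++ sketch below; the engine's spatializer handles this for us in practice, so the inverse-distance curve here is purely illustrative.

```cpp
#include <cmath>
#include <iostream>

struct Vec3 { float x, y, z; };

// Loudness falls off with distance from the emitting auditorium speaker.
float Attenuate(float gain, Vec3 speaker, Vec3 listener) {
    const float dx = speaker.x - listener.x;
    const float dy = speaker.y - listener.y;
    const float dz = speaker.z - listener.z;
    return gain / (1.0f + std::sqrt(dx * dx + dy * dy + dz * dz));
}

int main() {
    // Mic signal re-emitted from a speaker five meters from the listener:
    std::cout << Attenuate(1.0f, {5.0f, 0.0f, 0.0f}, {0.0f, 0.0f, 0.0f}) << "\n"; // ~0.167
}
```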


Going Forward

In the coming week, we hope to conduct interviews with stakeholders to gain more insight into the features our application needs in order to create the most social impact. We also plan to continue iterating on our auditorium and build a completely immersive environment.

11/1 - 11/7  P3 Milestone

First Week of Development and Prototyping

For this project, our team has decided to create a VR simulation to help relieve performance anxiety around public speaking. Our application will use audio processing along with motion tracking to provide useful public speaking analytics that help a client improve their speech habits. It will be marketed mostly to psychologists and educators, who may pay a yearly subscription and/or a one-time payment that includes the appropriate hardware plus a one-year subscription.

In this first week, our team strove to implement some of the most crucial features of the application. After deliberation, we decided these were a fully modeled auditorium, a speech input manager, customizable notecards, and a reactive audience. The main goal was to create a prototype of each feature that we could build upon throughout the rest of development.


Currently, our audio input system only tracks when the speaker begins and stops speaking, along with any volume changes. Going forward, we hope to use these signals to implement a processor that alerts the user when they are stuttering or speaking at a less-than-ideal volume. In the coming week, we also hope to reach out to some of our prospective stakeholders and begin refining our app to have a larger impact on our target audience.
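
As a sketch of what that detection involves, the standalone C++ below computes a short-window RMS level and compares it against a silence threshold; the threshold and window size are placeholders.

```cpp
#include <cmath>
#include <iostream>
#include <vector>

// RMS of a window of samples compared against a silence threshold tells
// us whether the speaker is currently talking.
bool IsSpeaking(const std::vector<float>& window, float threshold = 0.02f) {
    float sum = 0.0f;
    for (float s : window) sum += s * s;
    return std::sqrt(sum / window.size()) > threshold;
}

int main() {
    std::vector<float> quiet(256, 0.001f), loud(256, 0.2f);
    std::cout << IsSpeaking(quiet) << " " << IsSpeaking(loud) << "\n"; // 0 1
}
```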
