Tuesday, November 29, 2016

Get a Job In Silicon Valley by Playing a Coding Game


What if an online coding game could land you a six-figure job in California? Playing CodeFights could do exactly that. James Johnston discovered CodeFights through a Facebook ad and was intrigued, so he started to play. After two nights of coding problems, a pop-up message appeared asking Johnston if he was interested in getting a new job. He clicked yes, and the next day Tigran Sloyan, founder and CEO of CodeFights, called him to talk about potential jobs. Over the next month Johnston had a dozen interviews and landed a job in Silicon Valley. He went from designing software for orthodontists in Chattanooga, Tennessee to working for Thumbtack, a billion-dollar startup in Silicon Valley. He even got a stake in the business.

Since launching in 2014, CodeFights has registered five hundred thousand users, just in San Francisco. The twenty best players are given the best job opportunities, but there have still been dozens of players who have landed jobs in the past month alone. Petroff quotes CEO Sloyan saying "about 20% of people who are connected with companies secure a new job" (CNN). However, there is a cost for companies that hire programmers through CodeFights: CodeFights charges them 15% of the annual salary they plan to pay the new employee. Even at that price, many companies are still interested in investing in top tech talent, which means the players who are best at the coding games. CodeFights helps build individuals' coding ability and offers new talent to companies hiring software programmers.

Resources:








Friday, November 11, 2016

Creating Wireless Virtual Reality

A new cordless virtual reality device consists of two directional antennas.

The biggest issue with current virtual reality headsets is that they must be connected to the computers that render their high-resolution visuals. The headset connects to the computer by an HDMI cable, which is annoying for users, who have to maneuver around it and try not to trip. Recently, researchers at MIT's CSAIL have developed MoVR, a system that allows users to use any virtual reality headset wirelessly. The system works by using millimeter waves, which are high-frequency radio signals, to carry data between the computer and the headset. Millimeter waves are also expected to power the amazingly fast smartphones of the future.

Wireless virtual reality headsets are more comfortable for users, but they can't access all the advanced data processing of a tethered setup. In order to project the same high-resolution visuals as a VR headset with a cable, a wireless system needs data rates of more than 6 Gbps, which no wireless system today can achieve. MoVR works with mmWaves, which have been used for applications like high-speed internet and cancer diagnosis. The downside of mmWaves, however, is that they require an unbroken connection between transmitter and receiver, and that connection can be blocked simply by moving anything between the two. The CSAIL team found a way around this problem by designing MoVR to act as a programmable mirror that detects the mmWave signal and reflects it toward the receiver. MoVR computes the angles needed to accurately reflect the transmitter's signal toward the receiver on the headset. It finds these angles using two antennas, called phased arrays, that focus signals into narrow beams aimed at the MoVR system.
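The mirror idea comes down to a little geometry: to bounce a beam from the transmitter to the headset, the reflecting surface's normal must bisect the incoming and outgoing directions. The Python sketch below is a 2-D simplification with hypothetical helper names, not MoVR's actual code:

```python
import math

def mirror_angle(tx_pos, mirror_pos, rx_pos):
    """Angle (degrees) of the mirror normal that reflects a beam
    arriving from tx_pos off the mirror toward rx_pos.
    The normal must bisect the incoming and outgoing directions."""
    in_dir = unit(sub(tx_pos, mirror_pos))   # toward the transmitter
    out_dir = unit(sub(rx_pos, mirror_pos))  # toward the headset
    # The bisector of the two unit vectors is their sum
    nx, ny = in_dir[0] + out_dir[0], in_dir[1] + out_dir[1]
    return math.degrees(math.atan2(ny, nx))

def sub(a, b):
    return (a[0] - b[0], a[1] - b[1])

def unit(v):
    m = math.hypot(*v)
    return (v[0] / m, v[1] / m)
```

With the transmitter at the origin, the mirror five meters away, and the headset off to the side, the function returns the one orientation that closes the link; steering the phased array is then just a matter of updating this angle as the user moves.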

References:
http://news.mit.edu/2016/enabling-wireless-virtual-reality-1114
http://www.techtimes.com/articles/185580/20161112/htc-vive-virtual-reality-headset-goes-wireless-220-upgrade-kit-now-open-for-preorders.htm

Friday, November 4, 2016

Using Computer Science to Detect Childhood Communication Disorders


Massachusetts General Hospital's Institute of Health Professions has been working with researchers from the Computer Science department at MIT to create a computer system that automatically determines whether a child has a speech or language disorder. It's important to diagnose these disorders at a young age so children can learn to outgrow them by the time they're adolescents. Unfortunately, sixty percent of affected children go undiagnosed by the time they reach kindergarten. The system diagnoses speech and language disorders by analyzing audio recordings of children retelling a story: the children watch a series of images with an accompanying narrative, and then have to tell the story back in their own words. To check how accurate the system was, researchers had to "use a standard measure called area under the curve, which describes the tradeoff between exhaustively identifying members of a population who have a particular disorder, and limiting false positives" (Hardesty). The researchers performed three tests and found the system is accurate about eighty percent of the time. In medicine, a test that works more than seventy percent of the time is considered accurate.

John Guttag and Jen Gong of MIT believed that pauses in children's speech, when they try to complete a sentence or remember a word, are cues that help diagnose communication disorders. So they built thirteen acoustic features of children's speech into their system. The system recognizes patterns of pauses and errors in speech that correlate with the communication disorders it can diagnose. Among the acoustic features it recognizes are the length of pauses, the number of short and long pauses, and the variability of pause length. Thomas Campbell, a professor of behavioral and brain sciences at the University of Texas at Dallas, says, "The researchers' automated approach to screening provides an exciting technological advancement that could prove to be a breakthrough in speech and language screening of thousands of young children across the United States" (Hardesty).
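As a rough illustration of the pause-based features, here is a minimal Python sketch that turns a list of pause durations into the three kinds of features named above. The one-second threshold for a "long" pause is an assumption for illustration, not a figure from the article:

```python
import statistics

LONG_PAUSE_SEC = 1.0  # assumed threshold, not from the article

def pause_features(pauses):
    """Summarize a child's pause durations (in seconds) into the kinds
    of acoustic features described above: overall pause length, counts
    of short vs. long pauses, and variability of pause length."""
    return {
        "mean_pause": statistics.mean(pauses),
        "num_long": sum(1 for p in pauses if p >= LONG_PAUSE_SEC),
        "num_short": sum(1 for p in pauses if p < LONG_PAUSE_SEC),
        "pause_variability": statistics.pstdev(pauses),
    }
```

A classifier would then be trained on vectors like these, plus the other acoustic features, rather than on raw audio.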





Resources:
Hardesty, Larry. http://news.mit.edu/2016/automated-screening-childhood-communication-disorders-0922
https://techcrunch.com/2016/09/23/machine-learning-could-automate-screening-kids-for-speech-and-language-disorders/




Friday, October 28, 2016

Hacking For Good





When people think of hacking, they usually think of being robbed of personal information: credit card numbers, social security numbers, and whatever else the hacker wants. The same can happen to governments and companies that don't have sufficient security for their databases and software. However, two MIT students developed a way to turn hacking into something beneficial for companies. Michael Borohovski and Ainsley Braun created the fast-growing startup Tinfoil Security. Tinfoil Security sells commercialized scanning software that uses hacking techniques to detect vulnerabilities in websites and alert developers and engineers so they can quickly fix the issues before the website goes live. Already, thousands of startups are using the software to develop their websites. Braun states that 75 percent of companies that have used the software had some form of vulnerability found on their website. Tinfoil's website has a ticker showing how many vulnerabilities the software has detected so far; it is currently at 450,000. Braun says the company's number one goal is to secure the internet and end the threat from hackers.

Tinfoil's software finds vulnerabilities by crawling websites, much as Google does. Instead of looking for text and images, though, it looks for anywhere it can inject code to exploit a vulnerability. The software doesn't have access to source code or anything else an external hacker wouldn't have; instead it goes through every possible entry point and attempts to see if there's a vulnerability. Currently, the software has techniques to detect 50 different vulnerabilities, including the Open Web Application Security Project's top ten Web app risks. Every time a vulnerability is discovered, the software can run anywhere from ten to a hundred tests, and Tinfoil's five employees constantly update it as new risks and attacks are detected. One of the most common vulnerabilities is insecure cookies: if someone logs onto a website while on a public Wi-Fi hotspot, it's possible for a hacker to steal an insecure cookie and impersonate the user. On the user's end, the developer sees a description of each vulnerability, including its location and impact on the website, along with step-by-step instructions on how to fix it, given in the specific programming languages involved. It's nice to see individuals using computer science to counter hackers who use it for unlawful purposes.
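The "inject and observe" idea behind that kind of scanner can be sketched in a few lines. This hypothetical Python example (not Tinfoil's real scanner) injects a harmless probe string into one entry point and flags the page if the probe is echoed back unescaped, the telltale sign of a reflected injection flaw:

```python
PROBE = "<tinfoil-probe-123>"  # harmless marker string, not a real exploit

def scan_param(render, param):
    """Naive reflected-injection check for one entry point: supply a
    probe string as the parameter's value and see whether the page
    echoes it back verbatim. `render` stands in for fetching the page;
    a real scanner would issue HTTP requests to every entry point."""
    page = render({param: PROBE})
    return PROBE in page

# A toy vulnerable page that echoes its query parameter verbatim:
def vulnerable_page(params):
    return "<html>You searched for: %s</html>" % params.get("q", "")
```

A production scanner layers dozens of payload variants and response heuristics on top of this basic loop, one technique per vulnerability class.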


Example of list of vulnerabilities found on a website:



Resources:
https://www.tinfoilsecurity.com/about
http://news.mit.edu/2014/tinfoil-security-catches-web-vulnerabilities-0917
https://www.cloudflare.com/apps/tinfoil-security/


Friday, October 21, 2016

LED-filled "Robot Garden" Making Coding More Appealing





The "robot garden" is dozens of color-changing LED lights and more than a hundred origami robots that can swim, crawl, and blossom like flowers. It was developed by a team at MIT's Computer Science and Artificial Intelligence Lab. The garden is controlled from a Bluetooth-connected tablet and showcases the team's recent research on algorithms through robotic sheep, origami flowers that can blossom and change colors, and robotic ducks that change shape when put into an oven. Researchers say the "robot garden" is a visual symbol of their latest work in computing, as well as an artistically appealing way to attract young adults to programming.

The system is controlled through a simple "control by click" feature or a "control by code" feature. "Control by click" lets you control the garden by clicking on individual flowers, while "control by code" lets you control it by writing your own commands and programs in real time. Seeing their code play out in a physical environment helps students understand that programming is a powerful and creative skill. The system has sixteen tiles connected via Arduino controllers and programmed with search algorithms that explore the space in different ways. One of these algorithms is graph coloring, which ensures that no two adjacent tiles share the same color. The garden can test algorithms on over 100 robots, allowing a lot of experimentation. For example, one MIT researcher developed a system that uses object-recognition algorithms to make robots water, harvest, and take metrics of a vegetable garden. The "robot garden" shows how letting young students and adults experience real-world applications of programming can motivate them to understand and appreciate the unique and innovative aspects of coding.
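The graph-coloring constraint on the tiles can be sketched with a simple greedy algorithm. This hypothetical Python version (not the garden's actual firmware) gives each tile in a grid the first color not already used by its up or left neighbor, which is enough to guarantee no two adjacent tiles match:

```python
def color_tiles(rows=4, cols=4, colors=("red", "green", "blue", "yellow")):
    """Greedy graph coloring of a grid of tiles: visit tiles in row
    order and give each one the first color not already used by the
    neighbor above or to the left, so adjacent tiles never match."""
    assignment = {}
    for r in range(rows):
        for c in range(cols):
            taken = {assignment.get((r - 1, c)), assignment.get((r, c - 1))}
            assignment[(r, c)] = next(x for x in colors if x not in taken)
    return assignment
```

Since each tile has at most two already-colored neighbors, any palette of three or more colors always leaves a legal choice; the garden's sixteen tiles map naturally onto the 4x4 default here.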


Video illustrating how it works:





References:
http://cacm.acm.org/news/183473-can-an-led-filled-robot-garden-make-coding-more-accessible/fulltext
http://news.mit.edu/2015/can-led-robot-garden-make-coding-more-accessible-0218
https://blog.adafruit.com/2015/02/23/can-an-led-filled-robot-garden-make-coding-more-accessible-code-robotics-womeninstem/

Friday, October 14, 2016

Solving the Issue of Drug Errors




MIT graduate entrepreneurs Gauti Reynisson and Ívar Helgason worked for hospitals and health-care companies implementing medication-safety technologies when they noticed a major health issue: 1.5 million patients in the United States experience prescription medication errors every year due to drug administration mistakes. They decided to return to MIT to find a solution and created the MedEye. Developed and marketed by the startup Mint Solutions, the MedEye has made its way into hospitals in the Netherlands. It has caught the attention of the medical community, and the Dutch discovered that ten percent of MedEye's scans caught medication errors. Mint Solutions' goal is to aid nurses by selling them the MedEye to help them efficiently and correctly administer prescription medication. Currently, Mint Solutions is working with Dutch health care to spread the MedEye to fifteen more hospitals in countries including the UK, Belgium, and Germany.



In order to use the MedEye, a patient must have a wristband with a barcode. The nurse scans the barcode, which brings up the patient's medical record. Then the nurse puts the prescribed pills into the MedEye tray. The MedEye uses a small camera to scan the pills and analyze their size, shape, color, and markings. The computer science comes into play when the software identifies the pills by matching them against groupings in a database using recognition algorithms. What's impressive is that MedEye's software also updates and cross-references the results in the patient's medical record. The results are shown as color-coded boxes: green means the pill was correctly prescribed, red means it was wrong or unknown. What makes the MedEye unique, Helgason says, is that it requires no change in a hospital's workflow or logistics, making it "more usable and accessible in health care facilities" (Stop Drug Errors). It's great to see how computer science is becoming an important part of the innovation and growth of medical care and the administration of drugs.
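The database-matching step can be sketched as a nearest-neighbor lookup over a pill's measured features. The feature encoding and database entries below are made up for illustration; this is not MedEye's actual algorithm, which isn't public in this detail:

```python
def identify_pill(observed, database):
    """Match a scanned pill's feature vector (length mm, width mm,
    color id, marking id) against a database of known pills by
    smallest squared feature distance."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(database, key=lambda name: distance(observed, database[name]))

PILL_DB = {  # hypothetical example entries
    "aspirin 325mg": (12.0, 12.0, 1, 7),
    "lisinopril 10mg": (8.0, 8.0, 2, 3),
}
```

After identification, the software would compare the matched pill against the prescription in the patient's record and color the result green or red accordingly.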

References:
http://mintsolutions.eu/medeye-landing-en/#medeye-nurse-1
http://news.mit.edu/2014/startup-stops-drug-errors-0828
http://impressivemagazine.com/2013/11/02/medeye-system-reduces-medication-errors/

Friday, October 7, 2016

Detecting Emotions with Computer Science





In relationships it can sometimes be difficult to interpret what a friend or loved one is truly feeling. Most of our judgments are based on facial expressions and what people say. However, people tend to mask their emotions, whether out of fear of what others will think or deliberately, as with a poker face. Now, with the help of computer science, we can look behind those masks and find out what people are really feeling. Researchers at MIT's Computer Science and Artificial Intelligence Laboratory have created the "EQ-Radio," which uses wireless signals to detect what someone is really feeling. It can detect whether somebody is happy, sad, excited, or angry by measuring changes in breathing and heart rhythms. MIT professor and project lead Dina Katabi believes the system will be used in entertainment and health care across the world. It could also be used to gauge consumer reactions to a product or business.

The EQ-Radio is unique compared to other emotion-detecting technology. Existing systems rely on audiovisual cues or on-body sensors. Both are unreliable: facial expressions can be masked, and on-body sensors can be uncomfortable and inaccurate if they constantly move around. The EQ-Radio instead sends wireless signals toward a person and captures the reflections that bounce off their body back to the device. Programmed algorithms then break the reflections down into individual heartbeats. The device analyzes these heartbeats to measure levels of arousal and positive or negative affect, and these measurements are what give the EQ-Radio the power to distinguish emotions. If someone shows low arousal and negative affect, they're sad; if arousal is high and affect is positive, they're excited. That said, the EQ-Radio has tested as only 87 percent accurate, so if you have a really good poker face you might still be able to deceive the device.
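The final mapping from measurements to emotions amounts to picking a quadrant in a 2-D arousal/affect space. A minimal sketch, with zero as an assumed quadrant boundary (the real system's decision boundaries are learned from data, not hard-coded):

```python
def classify_emotion(arousal, valence):
    """Map arousal and positive/negative affect (valence) onto the
    four emotions the EQ-Radio distinguishes, one per quadrant:
    high arousal + positive -> excited, high + negative -> angry,
    low + positive -> happy, low + negative -> sad."""
    if arousal >= 0:
        return "excited" if valence >= 0 else "angry"
    return "happy" if valence >= 0 else "sad"
```

This quadrant layout follows the standard circumplex model of emotion; the hard part of the system is upstream, in recovering clean heartbeat waveforms from radio reflections.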







References:
http://news.mit.edu/2016/detecting-emotions-with-wireless-signals-0920
http://eqradio.csail.mit.edu/
https://www.engadget.com/2016/09/20/eq-radio-wireless-signals-emotion-detector/

Friday, September 30, 2016

Coding Bootcamp

Coding Dojo

What if somebody told you that you could double your annual salary by learning how to code? Then you go home, do some research online, and notice that Coding Dojo offers a fourteen-week course that teaches you how. If you could double your salary by going back to school for fourteen weeks, wouldn't you? I know I would. Coding Dojo found that 56.5% of its graduates earned less than $35,000 before enrolling. After completing the 14-week course, graduates make an average of $72,221. Mila Wilkinson, 28, said of Coding Dojo, "You're put in an environment that, for a lot of people, is unknown. It was just an intense learning environment, but I loved it because you're surrounded by people who want to learn the same things as you."







A look inside a day at the Coding Dojo.













References:
http://money.cnn.com/2016/09/27/technology/learn-to-code-coding-dojo/index.html
http://money.cnn.com/video/pf/2014/11/14/ivory-tower-moocs.cnnmoney/

Friday, September 23, 2016

Self Driving Cars


Self-driving cars, or autonomous vehicles, can sense their environment and navigate without human control. They use GPS, computer vision, odometry, radar, and lidar to detect their surroundings. Autonomous vehicles navigate past, between, and around other cars on the road through control systems that analyze the sensory data around them. These vehicles are so popular right now because of all the potential advantages they could provide. Among the possible benefits is a reduction in traffic collisions caused by human error. Other advantages include no age restrictions for driving, higher speed limits, smoother rides, less traffic, and reduced car theft through voice and fingerprint locks. But in order to trust self-driving cars to transport us, we must understand how they operate.
Professor Sebastian Thrun of Stanford University, who guides the Google self-driving car project, says the heart of the self-driving car is the laser range finder mounted on its roof. The Velodyne 64-beam laser allows the car to map its surroundings by producing a 3D model of the environment. The car's control system obeys traffic laws and avoids obstacles by integrating the laser's measurements with high-resolution maps of the world to produce data models of the area it's driving in. Self-driving cars also have radars on each side of the car, cameras to detect traffic lights and pedestrians, and a GPS that determines where the car is going. Tesla recently launched a self-driving capability in its cars, and for the most part it has been a success, apart from one major accident that was ultimately determined not to be the self-driving system's fault. I believe this is an important technology for the future, and a great deal of computer science programming and coding goes into these self-driving control systems.
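The first step in that mapping process is plain trigonometry: each laser reading, an angle relative to the car plus a distance, becomes a point in world coordinates given the car's position and heading. The sketch below is a 2-D simplification for illustration, not Google's actual pipeline:

```python
import math

def lidar_to_points(car_x, car_y, car_heading, readings):
    """Convert laser range-finder readings (angle in radians relative
    to the car, distance in meters) into world-frame obstacle
    coordinates, the raw input for building an environment map."""
    points = []
    for angle, dist in readings:
        world_angle = car_heading + angle
        points.append((car_x + dist * math.cos(world_angle),
                       car_y + dist * math.sin(world_angle)))
    return points
```

The control system would accumulate millions of such points per second into a 3D model, then align that model with its stored high-resolution maps to locate obstacles, lanes, and traffic signs.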



How Google's self-driving cars work.








References:
https://youtu.be/YXylqtEQ0tk
http://spectrum.ieee.org/automaton/robotics/artificial-intelligence/how-google-self-driving-car-works
https://en.wikipedia.org/wiki/Autonomous_car#History
www.digital90210.com


Friday, September 16, 2016

Customizing 3D Printing


A new Web-based interface for design novices allows a wide range of modifications to a basic design, such as a toy car.


The technology behind 3D printing is becoming more and more popular. Researchers at MIT and the Interdisciplinary Center Herzliya in Israel are working together to make 3D-printing customization less time-consuming for beginners. CAD (computer-aided design) applications help first-time users customize their products more easily and effectively. These applications convert CAD files into visual models that users can shape using the system's operations and by moving virtual sliders. In high school I used an Autodesk CAD application that made customizing very simple: you would just draw your image, type in the dimensions, and print to the 3D printer. I was able to produce a plastic water bottle.

MIT researchers and IDC Herzliya are collaborating on a CAD system designed for beginners with no design experience. Their goal is a system that lets beginners customize any product virtually so they can 3D print it and use it. They named the system "Fab Forms." The Fab Forms software lets users design with many different shapes and lines, each with a wide range of values for its dimensions. The software then calculates the geometries of the design and stores them in a database for the 3D printer. To make sure the dimensions and geometries of the designer's product are correct, the system runs whatever tests the designer chooses, and the new results are stored in the database. Even for an experienced user, constructing a product to a specific design can take hundreds of hours. Fab Forms cuts down the design time by distributing the different tasks among servers in the system's cloud. Lastly, the system generates a user interface, where all the customization happens. The interface displays a 3D model of the product as you customize it, with sliders showing the different dimensions of the design. It's amazing to think how much programming and coding goes into developing a CAD system this advanced.
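The precompute-then-slide idea can be sketched as enumerating a discretized parameter space, validating each resulting geometry, and caching the survivors, so that moving a slider later is just a cache lookup rather than a fresh geometry computation. In this hypothetical Python sketch, `build` and `is_valid` are stand-ins for the real system's geometry kernel and test suite:

```python
from itertools import product

def precompute_designs(slider_ranges, build, is_valid):
    """Enumerate every combination of discretized slider values,
    build the geometry for each, and cache only the valid ones.
    slider_ranges maps slider name -> list of allowed values."""
    cache = {}
    names = sorted(slider_ranges)
    for combo in product(*(slider_ranges[n] for n in names)):
        params = dict(zip(names, combo))
        geometry = build(params)
        if is_valid(geometry):
            cache[combo] = geometry
    return cache
```

Because the combinations are independent, this loop is exactly the kind of work that can be distributed across cloud servers, which is how the design time gets cut down.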

Here is a video describing how the Fab Forms system works:




References:
https://www.youtube.com/watch?v=LVOVmIIbeTY
http://news.mit.edu/2015/customizing-3-d-printing-0903
http://www.3ders.org/articles/20150907-fab-forms-mit-researchers-develop-system-for-customizable-3d-printing-designs.html

Monday, September 12, 2016

Computer Science Saves Children From Heart Surgery




In the EU CARDIOPROOF project, researchers have developed software that uses computer simulation to observe and analyze parts of a child's heart. When children are diagnosed with a heart defect or disease, they must go through a series of exhausting examinations, followed by the grueling treatments and interventions they need to survive. Fraunhofer researchers have worked together to develop a computer simulation that models the effects an intervention or treatment would have on a child's heart. The simulation helps determine whether a treatment is necessary, so that a child is spared the pain of a long-term intervention or operation that isn't needed. The simulation analyzes blood flow and pressure in the vessels.
The process begins with an MRI scanner taking images of the patient's heart. These images let doctors analyze the heart's blood flow and the shape and size of the blood vessels. The simulation software then calculates the blood flow and pressure in the vessels before and after a proposed intervention. These observations determine what treatments and interventions a child may or may not need, saving many children from unnecessary treatment and surgery. Parents will also save money on health costs they would otherwise face. Results show the software reduces the number of complications and follow-up treatments a child has to experience. It's great to see the impact computer science has on health, especially the health of children.
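To see why the flow/pressure relationship matters, even a heavily simplified model like Poiseuille's law shows how sharply the pressure drop across a vessel rises as the vessel narrows. The real CARDIOPROOF simulations solve full 3-D flow equations from patient imaging; this Python sketch is only illustrative:

```python
import math

def pressure_drop(flow_m3s, radius_m, length_m, viscosity=3.5e-3):
    """Poiseuille's law for steady flow in a straight tube:
    dP = 8 * mu * L * Q / (pi * r^4).
    The r^4 term is why a slightly narrowed vessel forces a much
    higher pressure for the same blood flow."""
    return 8 * viscosity * length_m * flow_m3s / (math.pi * radius_m ** 4)
```

Halving the radius at fixed flow multiplies the pressure drop sixteenfold, which is exactly the kind of before/after comparison a simulated intervention makes visible to doctors.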

References:
https://www.sciencedaily.com/releases/2016/09/160906085157.htm
http://www.cardioproof.eu/about/overview-on-the-project/

Friday, September 2, 2016

Is Super Mario Brothers Hard?



Have you ever played the first Super Mario Brothers? Those who have know the frustration and anger that comes from repeatedly losing a level. If it makes you feel any better, computer scientists have found that beating a level in Super Mario Brothers is equivalent to solving some of the hardest problems in computer science. In computer science, problems that are this hard are called "NP" problems, while "P" problems are the easy ones. Computer scientists and mathematicians have long debated the conjecture that P does not equal NP. If P doesn't equal NP, then there is no fast, easy way to solve hard problems; if P does equal NP, we could solve hard problems much faster and more easily.

MIT scientists have done research showing that solving a level in Super Mario Brothers is as hard as completing some of the hardest problems in the complexity class PSPACE. PSPACE is a complexity class above NP, meaning its problems are even more difficult. Like NP, PSPACE contains challenging problems that take far more time and effort to solve than P problems. Figuring out how to complete a difficult level of Super Mario Brothers takes a long time for beginners and even for experienced players who have already completed it, because of the difficulty of navigating the level and getting through each checkpoint. Even with the solution to a level in hand, it still takes a lot of time to beat it. It's almost unbelievable that such a simple, old game is as difficult as some of the most complex problems in computer science.

References:
http://gizmodo.com/playing-super-mario-brothers-is-like-solving-a-super-ha-1780010492
http://news.mit.edu/2016/mario-brothers-hard-complexity-class-pspace-0601

Wednesday, August 31, 2016

Magic Leap


What is Magic Leap and why might it kill all screens?












Considered by some to be the future of technology, Magic Leap is an augmented reality company that might change the world. In computing, augmented reality is the practice of overlaying video or photographic displays with computer-generated imagery and data. Magic Leap has over $540 million in funding from high-end investors such as Google and Qualcomm. These companies believe Magic Leap could lead to the death of the screen and a new era of gaming. The head-worn display works to make virtual images indistinguishable from reality.

The technology is called a Dynamic Digitised Lightfield Signal (DDLS), which projects images directly onto the retina of the eye to 'trick' the brain into thinking they're real. This is why the invention might kill screens, including tablets, laptops, and TVs. Companies are already working with Magic Leap so people can watch sports games and movies through its displays. For example, Lucasfilm's ILMxLAB has teamed up with Magic Leap to create a Star Wars clip where you are placed into an augmented reality alongside characters from the movie. When Magic Leap finally releases its headset, the world of computer science could change instantly. Can you imagine a world with no screens?


References:
https://www.technologyreview.com/s/534971/magic-leap/
http://www.pocket-lint.com/news/135688-what-is-magic-leap-and-why-might-it-kill-all-screens
http://www.theverge.com/circuitbreaker/2016/7/25/12271330/magic-leap-shopping-headset-ar-demo
https://www.britannica.com/technology/augmented-reality