Senior Computational Research Scientist @ Ancestry
Computational Research Scientist @ Ancestry DNA
Head of Product @ Enlitic
University of Minnesota-Twin Cities
I'm a computational neuroscientist and data scientist specializing in computer vision, human vision, machine learning, data science, information visualization, and human-computer interaction. My research has contributed to the finding that human brains integrate visual information according to a statistically optimal algorithm, and informed guidelines for the perception and design of 2D and 3D displays. In my spare time I've moonlighted as a producer of creative music visualization apps with musicians Philip Glass and Björk that have exhibited at the Museum of Modern Art, NY. My goal is to produce radical innovations in user products grounded in rigorous, cutting-edge data science.
Head of Product @ Enlitic
- Enlitic uses recent advances in machine learning and large medical datasets to make medical diagnostics faster, more accurate, and more accessible.
- I oversee the application of deep learning technology to solve high-impact medical problems such as the early detection of cancer.
- I'm always on the lookout for interesting datasets, problems, and partners in medicine, specifically related to medical images (CT, MRI, X-ray, pathology, etc.).
From August 2014 to Present (1 year 3 months), San Francisco, CA

Data Science Fellow @
- Created http://www.wikiscore.co, a web app that analyzes and rates Wikipedia pages.
- Scraped 100k+ Wikipedia pages using Python and stored results in a MySQL database on AWS.
- Trained a random forest classifier and analyzed Wikipedia’s readability, using Scikit-learn and NLTK.
- Developed the web app using Python, Flask, Twitter Bootstrap, HTML, Jinja2, and CSS.
From June 2014 to July 2014 (2 months), Palo Alto, CA

Senior Producer @
- Managed a startup team of 6 engineers and designers to create interactive music and visualization apps for major artists including Björk, Philip Glass, Metric, Passion Pit, and Jim Campbell.
- Produced 6 highly acclaimed apps for iOS, Mac, Android, Windows, and LEAP Motion, using C++, UIKit, OpenGL, Cinder, and Objective-C. Conceptualized apps, designed user experiences, spearheaded social integration and real-time app usage analytics, and contributed to production code.
- Received coverage from the App Store, WIRED, and Rolling Stone, won 2 Webby Honoree Awards, and was featured in an exhibit at the Museum of Modern Art, NY in 2013.
- Consulted for Disney on a 3D visualization app.
From May 2011 to November 2013 (2 years 7 months), San Francisco, CA

Postdoctoral Research Fellow @
- Conducted web experiments to enhance data visualization design for optimal visual understanding, using d3.js, Matlab, and Mechanical Turk.
From February 2011 to August 2012 (1 year 7 months), Berkeley, CA

Postdoctoral Research Scientist @
- Received a 3-yr NIH NRSA Fellowship to develop a model to predict biases in visual perception.
- Conducted visual psychophysics experiments in lab resulting in more than 1 million data points.
- Developed probabilistic machine learning model to reverse-engineer the brain’s prior knowledge of edge orientation, using optimization, bootstrapping and model comparisons in Matlab.
- Measured orientation statistics at multiple spatial scales in a large set of photos and discovered close match to human prior knowledge.
- Simulated a neural network that embeds the prior knowledge from image statistics and whose behavior closely matches human perceptual biases, in Matlab.
- Published results in Nature Neuroscience and received press in Science News and NPR.
From November 2007 to February 2011 (3 years 4 months), New York, NY

PhD Student Researcher @
- Received a 4-yr DOE Computational Sciences Fellowship to research the mechanisms of visual perception of images and digital displays.
- Developed probabilistic models of visual perception of 2D and 3D displays, and of sensory integration of multiple sources of information (stereo, perspective, focus, haptics).
- Designed and conducted visual psychophysics experiments in the lab using C, C++, OpenGL.
- Analyzed results and conducted Monte Carlo simulations in Matlab to support hypotheses.
- Published results in Nature Neuroscience and SIGGRAPH, and received press in NPR and the NY Times.
From 2001 to 2007 (6 years), Berkeley, CA

Research Assistant @
- Engineered a computer vision / Kalman-filter system to predict road turns for a car-mounted camera.
- Conducted behavioral experiments on human reaching and locomotion behaviors.
From 1999 to 2001 (2 years), Cambridge, MA

Intern @
- Helped turn now-classic information visualization research prototypes (Hyperbolic Browser, Cone Tree, and Perspective Wall) into consumer software with Xerox PARC spinoff InXight Software.
From 1997 to 1997 (less than a year), Palo Alto, CA
Ph.D., Vision Science @ University of California, Berkeley, From 2001 to 2007
MS, Computer Science, minor in Cognitive Science @ University of Minnesota-Twin Cities, From 1996 to 1999
BS, Computer Science @ University of Minnesota-Twin Cities, From 1993 to 1996

Ahna Girshick is skilled in: Computer Vision, Image Processing, Statistics, Machine Learning, Neuroscience, Data Analysis, Computer Graphics, Python, SQL, Matlab, Data Visualization, R, Science, Bayesian Statistics, Experimentation, Research, Perception, Experimental Design, Mathematical Modeling, Psychophysics, Data Science, User Research, iOS Development, User Interface Design, Cognitive Psychology, Mobile Applications