Aided in development of the BLISS variational inference software package, which encodes astronomical catalogs from deep-space images using computer vision. Specific contributions include generalizing the prior distribution over the parameter space (e.g., source density, galaxy morphology, light-source fluxes) to allow for the generation of multi-bandpass images, adjusting the inherited model to encode these newly generated multi-bandpass images, imbuing the package with the ability to align multi-bandpass images relative to a fixed world-coordinate system, and implementing several Gaussian mixture model priors for generating catalog-specific colors for our randomly generated light sources, among other contributions. Submission of this work for publication is pending. Tools used: PyTorch Lightning, Poetry, Hydra, Python packages (PyTorch, NumPy, scikit-learn, Seaborn, Pandas, OpenCL, Bokeh, etc.), SQL, PyTest, GitHub Actions, Git, APIs, etc.
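A minimal sketch of the Gaussian mixture model color prior described above, with hypothetical catalog data standing in for real photometry (the component count and the two color axes are illustrative assumptions, not BLISS internals):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical catalog colors (e.g., two color indices per source);
# in practice these would come from a reference astronomical catalog.
rng = np.random.default_rng(0)
catalog_colors = rng.normal(loc=[0.5, 0.2], scale=0.1, size=(500, 2))

# Fit a GMM to the catalog colors, then sample new colors
# for randomly generated light sources.
gmm = GaussianMixture(n_components=3, random_state=0).fit(catalog_colors)
sampled_colors, _ = gmm.sample(n_samples=10)  # colors for 10 new sources
print(sampled_colors.shape)  # (10, 2)
```

Sampling from the fitted mixture yields colors distributed like the reference catalog, which is the point of using a catalog-specific prior.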
Explored the novel HiRes dataset (Liu et al., 2023), which simultaneously profiles the transcriptome alongside the 3-D genome, using graph neural networks. The dataset contains paired Hi-C and scRNA-seq data collected from adult mouse neurons and murine embryonic cells at various time points during development. This project aimed to profile the dataset, learn a latent representation of the contact-matrix data, and predict the expression of known gene networks from the genomic contact information. Work included preprocessing the raw data from the Gene Expression Omnibus via JuicerTools at a variety of base-pair resolutions, learning a lower-dimensional representation of the Hi-C data at the single-cell and population levels (via t-SNE and spectral embedding), and training a graph convolutional network and a graph attention network to predict the gene expression of gene networks from the associated Hi-C data.
Tools used: PyTorch Geometric, hicstraw, GEOparse, NumPy, scikit-learn, etc. See slides here and report here.
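The spectral embedding step above can be sketched by treating a Hi-C contact matrix as a graph affinity matrix; the toy matrix here is a random stand-in for real contact counts:

```python
import numpy as np
from sklearn.manifold import SpectralEmbedding

# Toy symmetric Hi-C contact matrix (50 genomic bins); real data would
# come from hicstraw at a chosen base-pair resolution.
rng = np.random.default_rng(0)
contacts = rng.poisson(lam=2.0, size=(50, 50)).astype(float)
contacts = (contacts + contacts.T) / 2  # contact maps are symmetric

# Use the contact counts directly as graph affinities and embed each
# genomic bin into a low-dimensional space.
embedding = SpectralEmbedding(n_components=2, affinity="precomputed")
coords = embedding.fit_transform(contacts)
print(coords.shape)  # (50, 2)
```

The same idea extends to the single-cell level by embedding each cell's contact matrix (or a vectorization of it) before clustering or visualization.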
Built, trained, and tested various deep learning architectures (ResNet, VGG16, DenseNet, etc.) via transfer learning to 'diagnose' patients with COVID-19, viral pneumonia, or bacterial pneumonia based on chest x-rays. Tools used: TensorFlow, NumPy.
Estimated the parameters of a hidden dataset drawn from a bivariate normal distribution using expectation-maximization and maximum-likelihood estimation, and compared the performance of the two methods.
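The comparison can be sketched on synthetic data: closed-form maximum likelihood (sample mean and covariance) versus EM as implemented by a single-component Gaussian mixture. The distribution parameters below are illustrative, not the hidden dataset's.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic bivariate normal data with known parameters.
rng = np.random.default_rng(0)
data = rng.multivariate_normal([1.0, -2.0], [[1.0, 0.3], [0.3, 2.0]], size=2000)

# Closed-form MLE: sample mean and (biased) sample covariance.
mle_mean = data.mean(axis=0)
mle_cov = np.cov(data, rowvar=False, bias=True)

# EM via a one-component Gaussian mixture converges to the same estimates.
em = GaussianMixture(n_components=1, random_state=0).fit(data)
print(np.allclose(mle_mean, em.means_[0], atol=1e-2))  # True
```

With fully observed data and one component, EM reduces to the MLE; the two approaches differ once data are hidden or partially observed.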
Synthesized the neurophysiological basis of spatial maps distributed across cortical regions into a neurophysiologically motivated simulation of an agent navigating a familiar two-dimensional environment. The neural network model was simulated in MATLAB and consisted of a dynamical system of rate-coded, Hebbian neural populations, each encoding a latent variable relevant to spatial navigation. Submission currently under review for publication. Tools used: MATLAB, Git
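A minimal Python sketch of rate-coded population dynamics of the general form used in such models (the weight matrix, time constant, and nonlinearity here are illustrative assumptions, not the model under review):

```python
import numpy as np

# Leaky rate dynamics: tau * dr/dt = -r + f(W @ r + input),
# integrated with forward Euler; f is a rectification nonlinearity.
def simulate(W, inputs, steps=200, dt=0.01, tau=0.1):
    rates = np.zeros(W.shape[0])
    for _ in range(steps):
        drive = W @ rates + inputs
        rates += (dt / tau) * (-rates + np.maximum(drive, 0.0))
    return rates

# Two mutually inhibiting populations: the more strongly driven one
# suppresses the other (winner-take-all-like competition).
W = np.array([[0.0, -0.5], [-0.5, 0.0]])
rates = simulate(W, np.array([1.0, 0.2]))
print(rates)
```

Each population in the full model would encode one latent navigational variable, with Hebbian learning shaping the weights between them.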
Hebbian neural network simulation of the shift property exhibited by retinal cells, which contributes to a saturating response. Such a characteristic is useful, for example, when suddenly entering an area of intense light, where a more linear response would result in diffuse, blurred vision.
Recurrent, shunting neural network that reflects the ability of a small collection of neurons to separate a noisy, diverse collection of inputs.
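The saturating, bounded behavior of a shunting unit can be sketched as follows (the parameter values are illustrative assumptions; the equation is the standard shunting form):

```python
import numpy as np

# Single shunting unit: dx/dt = -A*x + (B - x)*e - (x + C)*i.
# The multiplicative (shunting) terms keep x bounded in [-C, B]
# no matter how large the excitatory input e becomes.
def shunting_step(x, excitation, inhibition, A=1.0, B=1.0, C=0.0, dt=0.01):
    dx = -A * x + (B - x) * excitation - (x + C) * inhibition
    return x + dt * dx

x = 0.0
for _ in range(1000):
    x = shunting_step(x, excitation=50.0, inhibition=10.0)
print(x)  # saturates below B = 1 despite the intense input
```

This bounded saturation is the mechanism behind both the retinal shift property above and the noise-suppressing input separation in the recurrent network.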
[Here] Review written for EECS 598: Action & Perception detailing the current understanding of hierarchical organization and processing in the brain and its influence on robotics and computer vision.
[Here] Implementation of an optimization pass written in LLVM for the improvement of unified memory usage in the context of GPU data prefetching.
[Here] A piece of creative writing.