My name is Mark Fajet. I'm a machine learning engineer and software developer with a passion for mathematics and algorithms, currently working at Amazon Web Services as a Software Development Engineer II. I spend my free time improving my software engineering and machine learning skills, playing guitar, and taking care of my dog and blue-tongued skink.
In 2018, I graduated summa cum laude from Florida International University with a Bachelor of Science in Computer Science and a separate Bachelor of Science in Mathematical Sciences. I enjoy learning, and I enjoy helping others learn as well, so after spending over a year in my current role at AWS, I decided to pursue a Master of Science in Computer Science at Florida International University while continuing to work remotely for AWS.
Now, as I come to the end of my graduate program, I am looking for a role I can be passionate about that blends my computer science and mathematics background to solve machine learning problems efficiently and creatively.
This course provides a general overview of machine learning topics such as supervised vs. unsupervised learning, model evaluation, and machine learning algorithms using Python and frameworks such as scikit-learn, SciPy, and NumPy.
A group of five courses teaching the foundations of deep learning, how to build neural networks, and how to lead successful machine learning projects. The courses go into detail about convolutional networks, RNNs, LSTMs, Adam, dropout, batch normalization, and Xavier/He initialization. They include case studies from healthcare, autonomous driving, sign language reading, music generation, and natural language processing, and teach not only the theory but also how it is applied in industry.
This course teaches the foundations of deep learning, the major technology trends driving it, how to build, train, and apply fully connected deep neural networks, and how to implement efficient (vectorized) neural networks. It also describes the key parameters in a neural network's architecture.
This course focuses on how to get good results from neural networks. It covers best practices, various algorithms to optimize and improve performance, bias/variance tradeoff, and how to tune hyperparameters in a productive manner.
This course describes how to build a successful machine learning project. The principles and knowledge in this course provide an understanding of how to diagnose errors in a machine learning system and how to prioritize the most promising directions for reducing error. It covers complex ML settings, such as mismatched training/test sets and comparing to and/or surpassing human-level performance, as well as how to apply end-to-end learning, transfer learning, and multi-task learning.
This course is about how to build convolutional neural networks and apply them to image and video data. It provides practice using convolutional neural networks for visual detection and recognition tasks, and using neural style transfer to generate art.
This course demonstrates how to build models for natural language, audio, and other sequence data. It explains Recurrent Neural Networks (RNNs) and commonly used variants such as GRUs and LSTMs. Its assignments apply sequence models to natural language problems, including text synthesis, and to audio applications, including speech recognition and music synthesis.
This intermediate-level course introduces the mathematical foundations needed to derive Principal Component Analysis (PCA), a fundamental dimensionality reduction technique. It covers how to use basic statistics of data sets, such as mean values and variances; compute distances and angles between vectors using inner products; and derive orthogonal projections of data onto lower-dimensional subspaces. Using these tools, the course shows how to derive PCA as the method that minimizes the average squared reconstruction error between data points and their reconstructions.
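The derivation described above can be sketched in a few lines of NumPy. This is a minimal illustration, not course material: the synthetic data, the dimensions, and the choice of k = 2 components are all assumptions made for the example. It projects centered data onto the top-k eigenvectors of the covariance matrix and checks that the average squared reconstruction error equals the sum of the discarded eigenvalues.

```python
import numpy as np

# Synthetic data: 200 points in 5 dimensions (illustrative only)
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))

# Center the data; the PCA derivation assumes zero-mean data
mu = X.mean(axis=0)
Xc = X - mu

# Eigendecomposition of the covariance matrix
cov = Xc.T @ Xc / len(Xc)
eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order

# Orthonormal basis of the top-k principal directions
k = 2
B = eigvecs[:, -k:]

# Orthogonal projection onto the k-dimensional subspace, then reconstruction
Z = Xc @ B            # lower-dimensional codes
X_rec = Z @ B.T + mu  # back in the original space

# Average squared reconstruction error = sum of the discarded eigenvalues
err = np.mean(np.sum((X - X_rec) ** 2, axis=1))
print(np.isclose(err, eigvals[:-k].sum()))  # True
```

Choosing the top-k eigenvectors is exactly what minimizes this error over all k-dimensional subspaces, which is the result the course builds toward.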
This course covers how to carry out common deep learning workflows such as Image Classification and Object Detection and experiment with data, training parameters, network structure, and other strategies to increase performance and capability.
This course covers how to use CUDA to drastically improve the performance of CPU-only applications written in C/C++ by taking advantage of GPUs. It introduces the platform as well as how to do efficient memory management.
This course covers how to use CUDA in Python applications to gain performance improvements by utilizing the GPU. It covers Numba, PyCUDA, and when to use each, as well as additional memory management techniques.
This course covers elements of the functional programming style and how to apply them usefully in daily programming tasks using Scala.
This course taught how to apply the functional programming style in the design of larger applications, covering a variety of concepts, from lazy evaluation to structuring libraries using monads. It included graded projects ranging from state-space exploration to random testing to discrete circuit simulators.
This course taught how to use Scala and functional programming to write parallelized algorithms that solve problems efficiently.
You can message me on LinkedIn.