A compensation and benefits platform to help enterprises develop winning pay strategies.
AppDataServices Website
A Next.js application for the AppDataServices company, displaying the company's skills and case studies. The work also covered deployment to cloud services, analytics, and Google Tag Manager to track contact-form events.




Optha Insights
The Ophthalmology Grader offers state-of-the-art Deep Learning models that make quality checking of images easier and faster by extracting features and answering a number of questions per image.
It also offers image processing techniques for several domains, including IOL (intraocular lens) image registration, retina detection, various other feature extraction, and image registration.
Users can also view image data from different machines in a custom DICOM viewer that reads Heidelberg DICOM and E2E files as well as Zeiss DICOM data, along with frame registration and metadata. It also allows creating a thickness map with an ETDRS grid by extracting OCT layers from DICOM files using Deep Learning models.
Blooms App
It is a web-based portal where a user can upload a video or audio file from a classroom recording or any other session. The file is then processed by Speech-to-Text models to extract text, from which we derive different insights. Here is the list of insights we get from an audio file.
Basic Meta
We extract basic insights from the text, splitting it into sentences and questions. We also extract speaker information to learn who the speakers are and how often each one occurs.
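As a rough sketch of that splitting step, sentences and questions can be separated with a simple pattern match. The real pipeline works on Speech-to-Text output; this standalone function is illustrative only:

```python
import re

def basic_meta(transcript):
    """Split a transcript into sentences and questions (simplified sketch)."""
    # Keep each run of text up to a sentence-ending punctuation mark.
    # (Text after the last terminator is dropped; fine for a sketch.)
    parts = re.findall(r"[^.?!]+[.?!]", transcript)
    sentences = [p.strip() for p in parts]
    questions = [s for s in sentences if s.endswith("?")]
    return {"sentences": sentences, "questions": questions}

meta = basic_meta("Today we cover loops. What is a loop? It repeats code.")
print(meta["questions"])  # ['What is a loop?']
```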
Blooms and Emotions
We have trained a Natural Language Processing model that predicts the user's emotion for each sentence or paragraph. We also have a Bloom's-taxonomy classification model with very good accuracy that can easily extract Bloom's data. The results are then shown to the user.
NER
We extract Named Entities such as names, places, and datetimes from the text using NLP and show the user an overlaid transcription, with a toggle between the raw and overlaid views.
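To illustrate the overlay idea, here is a minimal sketch of rendering entity labels inline. The (start, end, label) spans are hypothetical inputs standing in for what a NER model would produce:

```python
def overlay_entities(text, entities):
    """Render text with entity labels inline, e.g. [London|PLACE].

    `entities` is a list of (start, end, label) character spans,
    as a NER model would emit them.
    """
    out, cursor = [], 0
    for start, end, label in sorted(entities):
        out.append(text[cursor:start])           # untagged text before entity
        out.append(f"[{text[start:end]}|{label}]")  # tagged entity
        cursor = end
    out.append(text[cursor:])                    # trailing untagged text
    return "".join(out)

raw = "Alice flew to London on Monday."
spans = [(0, 5, "NAME"), (14, 20, "PLACE"), (24, 30, "DATE")]
print(overlay_entities(raw, spans))
# [Alice|NAME] flew to [London|PLACE] on [Monday|DATE].
```

Toggling between views is then just a matter of showing `raw` or the overlaid string.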
Other Meta
We also extract other metadata such as paragraphs, summaries, and topics, and display a word cloud of the most frequently occurring words for an at-a-glance understanding of the speech.
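The word-cloud weighting boils down to counting non-stopword frequencies. A minimal sketch (the stopword list here is a tiny illustrative subset):

```python
import re
from collections import Counter

# Tiny illustrative subset; a real pipeline uses a full stopword list.
STOPWORDS = {"the", "a", "an", "is", "to", "and", "of", "in", "when"}

def top_words(text, n=5):
    """Return the most frequent non-stopword terms, as a word cloud weights them."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    return counts.most_common(n)

print(top_words("The loop repeats. A loop ends when the loop condition is false."))
```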
Contentify – A NLP Based Web Portal
An AI paraphraser with shared access and other text processing tools, such as summarization and grammar checking.

It is a custom web application that lets users add teammates and apply different text processing and text generation techniques. It contains very powerful Deep Learning models (Transformers) for text rephrasing and generation, and also lets you rephrase a single sentence of the output if you want to change a specific line.
It also allows sharing text and output with teammates in real time, so the team can view results and complete work collaboratively.
CXR Processing and Report Generation
X-Ray image processing for Opacity detection, Disease classification and X-Ray report generation from images.
- Opacification Detection from X-Ray images
- Image classification for Pneumonia detection
- Report generation using features from images.
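The report-generation step can be sketched as a simple template over the model outputs. The field names and threshold below are illustrative assumptions, not the production schema:

```python
def generate_report(findings):
    """Assemble a simple radiology-style report from model outputs.

    `findings` holds hypothetical classifier/detector outputs:
    a list of opacity regions and a pneumonia probability.
    """
    lines = ["CHEST X-RAY REPORT", ""]
    if findings["opacities"]:
        regions = ", ".join(findings["opacities"])
        lines.append(f"Findings: opacification detected in {regions}.")
    else:
        lines.append("Findings: no opacification detected.")
    prob = findings["pneumonia_probability"]
    # 0.5 is an illustrative decision threshold.
    impression = "suspicious for pneumonia" if prob >= 0.5 else "no evidence of pneumonia"
    lines.append(f"Impression: {impression} (model confidence {prob:.0%}).")
    return "\n".join(lines)

report = generate_report(
    {"opacities": ["left lower lobe"], "pneumonia_probability": 0.87}
)
print(report)
```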
Encampment Detection, tracking and plotting on map
In this project we work with roadside data captured by a GoPro, detecting street encampments and people along roadsides.
The project has seven detection classes:
- Small structured Encampments
- Large Structured Encampments
- Small Unstructured Encampments
- Large Unstructured Encampments
- Pile of Debris
- Sitting Person
- Lying Person
We labeled around 8 GB of video data (about 23K object instances) and trained models with it. We used TensorFlow for this project, training different models such as Faster R-CNN and a ResNet-based detector to find the best balance of speed and accuracy.
For inference, we then added object tracking to get an accurate count of objects in a video: an object is added to the tracker the first time it is detected. We used different trackers from OpenCV for this.
We then processed the GoPro videos and extracted GPS data from them using Go.
Finally, we created a web application using Flask to upload new videos and process them in the cloud. We used Google Cloud for this, with services such as Compute Engine, Cloud SQL, and Cloud Storage to store and process the data, and the open-source Esri maps JS plugin to display the extracted data.
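The count-an-object-once idea can be sketched with a simple IoU matcher. This is a simplified stand-in for the OpenCV trackers we actually used:

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def count_unique(frames, threshold=0.3):
    """Count objects across frames, assigning a new ID only when a
    detection overlaps no previously tracked box."""
    tracked = []  # last known box per tracked object
    for detections in frames:
        for det in detections:
            matches = [i for i, t in enumerate(tracked) if iou(det, t) >= threshold]
            if matches:
                tracked[matches[0]] = det  # same object: update its position
            else:
                tracked.append(det)        # first sighting: count a new object
    return len(tracked)

frames = [
    [(0, 0, 10, 10)],                       # frame 1: one object
    [(1, 0, 11, 10), (50, 50, 60, 60)],     # frame 2: same object moved + a new one
]
print(count_unique(frames))  # 2
```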

By clicking on a point, we can view the object detected at that point and the corresponding video.

Human body, face and hand pose estimation and keypoint extraction
Here is a demo of a project for pose estimation and keypoint extraction for the human body, face, and hands.
In this project, we extracted keypoints for the face, the hands, and the full body.
Deploy a Flask application on Azure App services using Docker
Flask is one of the simplest and easiest-to-use micro frameworks for web application development. It doesn't require any special libraries to get started, and you can create a basic application in a couple of minutes. Here we will create a basic Flask web application.
Azure is Microsoft's cloud platform for building, testing, and deploying applications in the cloud, with many other features such as managed databases. It offers Azure App Service and VM services for application deployment, and you can also use Azure Kubernetes Service to deploy applications at larger scale. In this tutorial we will use Azure App Service, so you need to enable billing on Azure; a one-month free trial is offered on account creation.
Docker is a set of platform-as-a-service products that help deploy applications regardless of the operating system of a given environment. Software is delivered in packages called Docker containers, which are isolated from one another but can still communicate with each other. You can download and install Docker for your operating system from Docker Hub, and for this tutorial you need a Docker Hub account. https://www.docker.com/products/docker-desktop
After installing Docker and creating accounts on Azure and Docker Hub, we can get started with a basic Flask application. You need Python installed on your operating system; Flask can then be installed from the command line:
pip install flask
Create a Flask application
Open a text/code editor and create a file named "main.py" with the following code for a basic Flask application:
from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello():
    return "Hello World!"

if __name__ == '__main__':
    app.run(host="0.0.0.0", port=80)
Now run the application by opening a terminal in the current directory and typing "python main.py", then navigate to http://localhost or http://0.0.0.0 to test it.
Next we need to configure the app for Docker. Create a file named Dockerfile (no extension) with the following content:
FROM tiangolo/uwsgi-nginx-flask:python3.6
COPY ./requirements.txt .
RUN pip install -r requirements.txt
COPY ./app /app
Create another file named requirements.txt in the current folder and put the word flask in it. Now create a new folder named app in the current directory and move main.py into it. Your file structure should look as follows:
.
|-app
└─ main.py
|-Dockerfile
|-requirements.txt
Open a terminal and type:
docker build -t YOUR_DOCKERHUB_USERNAME/PROJECT_NAME .
# For example
docker build -t faizan170/test-app .

First it will download the base image and install the libraries listed in requirements.txt, then build the Docker image. When the process is complete, check that the application is working by executing this command and navigating to http://localhost:
docker run -p 80:80 YOUR_DOCKERHUB_USERNAME/PROJECT_NAME

If the application is working, push the image to Docker Hub using:
docker push YOUR_DOCKERHUB_USERNAME/PROJECT_NAME
Once you have pushed the image to Docker Hub, navigate to the Azure Portal at https://portal.azure.com for deployment. In the App Services section, click to create a new app.
In the application settings, follow these steps:
- Create or select a resource group
- Enter a unique name for your application
- In the Publish section, select Docker Container
- Leave the remaining settings as default and move on to the Docker options

Now, in Image Source, select Docker Hub and enter your Docker image name in the Image and Tag field. Change the other options to suit your requirements if you want to, then click Review + Create.

Once validation is complete, click Create to create your application. It will take 2 to 3 minutes for your application to go live. Once the process is completed, navigate to your URL to check that the application is working.
You can get the complete code for this post from this GitHub link:
https://github.com/ml-blog/flask-application-nginx-docker-demo
Connect custom domain to Google Compute Engine Ubuntu 18.04
Google Cloud offers cloud computing resources for deploying almost any kind of application. We can deploy using App Engine, Compute Engine, Kubernetes, and Docker. In this tutorial we will learn how to connect a custom domain to Google Compute Engine.
First you need a registered domain name. There are different providers for that, such as Google Domains, GoDaddy, and Namecheap. We will use Google Domains for this tutorial, but you can use any other domain provider; the steps are the same for all. Register a domain with a provider before the next steps.
Next, create or connect a Google account and create or select a project in the Google Cloud console. You also need to enable billing to use Google Compute Engine. Once all these steps are done, we are ready to create a new virtual machine in the cloud console.
Go to https://console.cloud.google.com/ and select a project.
Next, go to the Compute Engine section and click on VM Instances.

In the next section, click Create to create a VM instance.
For Machine Type, select an f1-micro instance, and for Boot Disk select Ubuntu > Ubuntu 18.04; a 10 GB boot disk will be enough. One f1-micro instance is included in Google Cloud's free tier, so you don't have to pay for it. Also allow HTTP traffic, and HTTPS if you want to use SSL. Once your machine is created, you can view its External IP, which you will use to connect your domain to the machine. You can also reserve this IP for yourself, in case you shut the machine down.

In the sidebar, select Network Services and then the Cloud DNS option; this will automatically enable the DNS API. We need to create a record set and add DNS records to connect our domain to the external IP of the virtual machine.

Now select Create Zone and enter the details, such as the zone name and your DNS name (this will be your domain name, for example shadefinder.app). Then click Create.

Click to add an "A" record set, type the external IP address of your virtual machine in the IPv4 Address field, and click Create. This record tells the domain where the application is running.

Create another record for www. In this record, enter www as the DNS Name, change the Resource Record Type to CNAME, and enter your domain name as the Canonical Name.
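In standard zone-file notation, the two records together look roughly like this (example.com and the IP address are placeholders for your domain and your VM's external IP):

```
; A record: points the bare domain at the VM's external IP
example.com.      300  IN  A      203.0.113.10

; CNAME record: points www at the bare domain
www.example.com.  300  IN  CNAME  example.com.
```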

You have now set up the DNS records on the server side. It's time to add these DNS records at your domain name provider.

It will take a little while for your domain to become active with Google Compute Engine. After that, you are ready to deploy your application to the cloud. See the next posts to learn how to deploy applications such as Node.js and Python Flask on Compute Engine using NGINX, or follow the Google Cloud docs for more details.