Staff DevOps Engineer @ Adobe, San Francisco Bay Area

Sr. DevOps Engineer @ Intapp
Worked as part of the DevOps team supporting multiple products hosted in AWS.
- Used technologies such as Terraform, CloudFormation, Docker, ECS, EBT, Docker Cloud, Python, the AWS SDK, and Packer
- Managed the microservices infrastructure hosted on AWS ECS and configured via Terraform
- Monitoring/search tools: New Relic, Datadog, ELK Stack
- Jenkins CI/CD
- Implemented a DR strategy for the RDS tier using native AWS services (Read Replicas, DMS)
- GitHub as the source code repository (managed the entire Git flow with tight integration with Slack and Jenkins)
From November 2016 to June 2017 (8 months), Palo Alto

Sr. DevOps Engineer @ Financial Engines

Sr. DevOps Engineer - Oct 2015 - Nov 2016
Part of the Cloud Engineering team at Financial Engines, performing DevOps activities and supporting the team.
- Took the lead on compute infrastructure tasks on-premise and in our public cloud
- Helped build our configuration management pipeline using Puppet Enterprise
- Helped build monitoring and business SLAs using AppDynamics, CloudWatch, and other APM tools
- Led the DevOps charter and the Infrastructure-as-Code journey at Financial Engines

Infrastructure Engineer - Oct 2013 - Oct 2015
Worked as an Infrastructure Engineer on the Infrastructure team under the Technology Services department, reporting to the Director of Infrastructure. As a team we were responsible for providing robust and secure infrastructure to the IT and Technical Operations groups. My main goals and responsibilities were as follows:
1) Managed the compute infrastructure for the organization; we rely on VMware and AWS.
2) Provided design and implementation support for new compute and storage initiatives for our customers.
3) Ensured secure and scalable infrastructure deployments for the organization; we rely on Lenovo and Cisco UCS servers.
4) Supported the storage infrastructure for the organization by performing storage administration tasks, primarily on NetApp.
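The RDS DR approach mentioned for the Intapp tier (native read replicas) can be sketched with boto3; a minimal sketch, assuming hypothetical instance identifiers, regions, and account ID, none of which come from the profile itself:

```python
# Sketch of a cross-region RDS read replica for DR, in the spirit of the
# work described above. All identifiers below are hypothetical examples.

def replica_params(source_id: str, replica_id: str, source_region: str,
                   account_id: str = "123456789012") -> dict:
    """Build kwargs for rds.create_db_instance_read_replica().

    Cross-region replicas must reference the source instance by full ARN.
    """
    source_arn = f"arn:aws:rds:{source_region}:{account_id}:db:{source_id}"
    return {
        "DBInstanceIdentifier": replica_id,
        "SourceDBInstanceIdentifier": source_arn,
        # SourceRegion lets boto3 presign the cross-region copy request.
        "SourceRegion": source_region,
    }

def create_dr_replica(dr_region: str, **kwargs) -> None:
    """Issue the replica-creation call; needs real AWS credentials to run."""
    import boto3  # imported lazily so the pure helper above needs no SDK
    rds = boto3.client("rds", region_name=dr_region)
    rds.create_db_instance_read_replica(**replica_params(**kwargs))
```

Promoting such a replica in the DR region is then a single `promote_read_replica` call during failover.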
From October 2013 to November 2016 (3 years 2 months), Sunnyvale, CA

Customer Support Intern @ RedSeal Networks
Worked with the Customer Support team at RedSeal Networks. Below is an overview of my role and responsibilities:
- Managed the virtual and physical lab infrastructure for the team and fulfilled external (customer) and internal requests.
- Deployed VMs and managed the virtual environment (VMware and Hyper-V) by installing VMware Tools on servers and cloning VMs.
- Created a backup strategy for the virtual environment and deployed it using VMware.
- Used HTML, CSS, and PHP to maintain the support website.
- Compiled and managed technical documentation for resolved cases/issues.
- Tested the RedSeal product in our environment and provided workarounds for known bugs.
From June 2013 to August 2013 (3 months)

Engineering Intern @ Hitachi Data Systems
Worked with Technical Operations under Global Services Engineering (GSE) at Hitachi Data Systems. In a nutshell, my tasks included the following:
1) Helped support groups with their production environments by fulfilling requirements for their projects via the ticketing systems.
2) Installed operating systems on physical and virtual machines (Windows, RHEL, Oracle Linux, SUSE Linux, Debian Linux).
3) Deployed VMs and managed the virtual environment by installing VMware Tools on servers, cloning VMs, and backing up the VMs.
4) Compiled and managed technical documentation for complex projects.
5) Worked with Brocade switches and connected them to the SAN.
6) Worked on storage.
From June 2012 to May 2013 (1 year), Santa Clara, CA

Sr. DevOps Engineer @ Adobe
Working as a core member of the DevOps Search & Sensei team at Adobe. Fast-paced, start-up-like work culture within the team.
The team's motto is more "DEV LESS OPS." Responsible for:
- Designing, architecting, implementing, and supporting big data infrastructure on the AWS and Azure clouds
- Big data tech stack used: Apache Hadoop, Apache HBase, Apache Spark, Apache Storm, Elasticsearch
- AWS technologies: EC2, VPC, S3, R53, EMR, ASG, ELB, SNS, Lambda, CloudFormation, CloudWatch (and many more)
- Azure technologies: Azure VMs, Blob Storage, HDInsight, App Gateways, ARM templates, Scale Sets, Availability Sets (and many more)
- DevOps tools: Chef, Jenkins, Git, Artifactory, Docker
- Programming languages: Bash, Python, Ruby
- Indexing pipeline handling 50,000 events/second
- Tuning JVMs and configurations for Hadoop, Storm, and Elasticsearch for maximum throughput
San Jose, CA
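An indexing pipeline at that event rate typically writes to Elasticsearch in bulk rather than one document per request. A minimal sketch of assembling newline-delimited `_bulk` request bodies; the index name and batch size are illustrative assumptions, not the actual production values:

```python
# Sketch of batching events for Elasticsearch's _bulk API, in the spirit of
# the high-throughput pipeline above. Index name and batch size are
# illustrative, not the real production configuration.
import json
from typing import Iterable, Iterator, List

def bulk_batches(events: Iterable[dict], index: str = "events",
                 batch_size: int = 1000) -> Iterator[str]:
    """Yield newline-delimited _bulk bodies of up to batch_size documents."""
    lines: List[str] = []
    count = 0
    for event in events:
        lines.append(json.dumps({"index": {"_index": index}}))  # action line
        lines.append(json.dumps(event))                         # source line
        count += 1
        if count == batch_size:
            yield "\n".join(lines) + "\n"  # _bulk bodies must end in newline
            lines, count = [], 0
    if lines:  # flush any final partial batch
        yield "\n".join(lines) + "\n"
```

Each yielded string would be POSTed to the cluster's `/_bulk` endpoint; batching amortizes HTTP and indexing overhead across many events instead of paying it per document.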