
Saran Reddy

Kafka Solutions Consultant

Big Data Platform Architect

San Francisco Bay Area

Saran Reddy's Email Addresses & Phone Numbers

Saran Reddy's Work Experience

Williams-Sonoma, Inc.

Kafka Solutions Consultant

February 2019 to April 2020

San Francisco Bay Area

Freddie Mac

AWS Migration Architect/Engineer

March 2019 to July 2019

Richmond, Virginia Area

Barrick Gold Corporation

Cloud Big Data Platform Architect

March 2018 to January 2019

California

Saran Reddy's Education

University of California, San Diego

Computer Science

2012 to 2012

About Saran Reddy's Current Company

Williams-Sonoma, Inc.

Introduced and implemented a Kafka / Prometheus platform in the Williams-Sonoma technology stack.
* Built and deployed Confluent Kafka 5.1.2 on Kubernetes using Helm charts.
* Developed and deployed Kafka MirrorMaker as Docker images on a Kubernetes cluster in production for cross-data-center replication.
* Automated deployment processes and routine support tasks using Ansible on Kubernetes.
* Maintained...
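
As a rough illustration of the kind of post-deployment check a Kafka platform like this typically gets, here is a minimal produce/consume smoke test in Python using the confluent-kafka client; the broker address, topic, and consumer group below are hypothetical placeholders, not details from the actual Williams-Sonoma deployment.

```python
# Minimal smoke test for a newly deployed Kafka cluster (sketch).
# Broker address, topic name, and consumer group are hypothetical placeholders.
from confluent_kafka import Producer, Consumer

BOOTSTRAP = "kafka-0.kafka-headless.kafka.svc:9092"  # assumed in-cluster service name
TOPIC = "smoke-test"                                  # assumed test topic

# Produce a single record and wait for broker acknowledgement.
producer = Producer({"bootstrap.servers": BOOTSTRAP})
producer.produce(TOPIC, key="probe", value="hello from smoke test")
producer.flush(10)

# Consume it back to confirm end-to-end delivery.
consumer = Consumer({
    "bootstrap.servers": BOOTSTRAP,
    "group.id": "smoke-test-group",
    "auto.offset.reset": "earliest",
})
consumer.subscribe([TOPIC])
msg = consumer.poll(timeout=10.0)
if msg is None or msg.error():
    print("smoke test failed:", msg.error() if msg else "no message received")
else:
    print("smoke test ok:", msg.value().decode())
consumer.close()
```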

About Saran Reddy

📖 Summary

Kafka Solutions Consultant @ Williams-Sonoma, Inc.
* Introduced and implemented a Kafka / Prometheus platform in the Williams-Sonoma technology stack.
* Built and deployed Confluent Kafka 5.1.2 on Kubernetes using Helm charts.
* Developed and deployed Kafka MirrorMaker as Docker images on a Kubernetes cluster in production for cross-data-center replication.
* Automated deployment processes and routine support tasks using Ansible on Kubernetes.
* Maintained multiple Kafka clusters on both physical nodes and Kubernetes-orchestrated containers.
* Benchmarked the Kafka clusters.
* Monitored and met all SLAs around performance, resiliency, and scalability.
* Set up Prometheus and Grafana to monitor multiple Kafka clusters.
* Set up highly available Prometheus servers across data centers to avoid single points of failure.
* Exported JMX and consumer-offset metrics to Prometheus and Grafana dashboards.
* Implemented log- and container-level monitoring using ELK/Kibana.
* Provided alerting, technical documentation, and operational support so the team could identify and resolve cluster-health incidents as quickly as possible.
* Developed load and performance plans and scripts in coordination with development teams.
* Trained the team on Kafka and Prometheus from beginner to expert level.
* Built close relationships with clients and stakeholders to understand the platform use cases and prioritize work accordingly.
Tools used: Apache Kafka, Confluent Kafka, Docker, MirrorMaker, Kafka administration, Kubernetes, Ansible, Prometheus, ELK, Grafana, Kibana.
From February 2019 to April 2020 (1 year 3 months), San Francisco Bay Area.
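
As an illustration of the monitoring setup described above, the sketch below queries a Prometheus server's HTTP API for Kafka consumer-group lag; the Prometheus address and the metric name (typical of a Kafka/offset exporter) are assumptions, not details from the actual deployment.

```python
# Sketch: query a Prometheus server for Kafka consumer-group lag.
# The Prometheus URL and the metric name are hypothetical placeholders
# and depend on the exporters actually deployed.
import requests

PROMETHEUS_URL = "http://prometheus.monitoring.svc:9090"       # assumed service address
QUERY = "sum by (consumergroup) (kafka_consumergroup_lag)"     # assumed exporter metric

resp = requests.get(f"{PROMETHEUS_URL}/api/v1/query", params={"query": QUERY}, timeout=10)
resp.raise_for_status()
payload = resp.json()

# The instant-query API returns {"status": "success", "data": {"result": [...]}}.
for series in payload["data"]["result"]:
    group = series["metric"].get("consumergroup", "<unknown>")
    _, value = series["value"]  # [unix_timestamp, "value-as-string"]
    print(f"{group}: lag={value}")
```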

AWS Migration Architect/Engineer @ Freddie Mac
* Migrated HDFS, HBase, and Solr clusters from on-premises to the AWS cloud.
* Deployed AWS infrastructure such as EC2, VPC, and RDS using CloudFormation templates (CFTs).
* Developed and deployed blueprints to set up an HDP 2.6.2 cluster on AWS similar to the on-premises cluster.
* Migrated and upgraded Solr (5.1 to 6.6) from on-premises to AWS; migrated Solr collections by backing up the configs and cores.
* Migrated the HBase cluster using the export and import utilities.
* Migrated HDFS data from on-premises to AWS using the DistCp utility.
* Developed a data validation framework to validate data consistency for HDFS, Hive, Ranger, Atlas, Solr, and HBase.
* Benchmarked on-premises vs. AWS cloud performance using YCSB.
* Provided best-practice design recommendations for migrating an application from an on-premises location to the AWS cloud.
* Designed a highly available, scalable cloud infrastructure with performance and uptime as the main areas of concern.
* Assessed cost vs. performance to ensure the application ran smoothly while providing cost savings.
* Worked with team members to identify what custom changes might be needed and the level of effort required to migrate the current on-premises application to the AWS cloud.
From March 2019 to July 2019 (5 months), Richmond, Virginia Area.
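
To illustrate the CloudFormation-driven provisioning mentioned above, here is a minimal boto3 sketch that creates a stack from a template and waits for completion; the stack name, template file, region, and parameters are hypothetical placeholders.

```python
# Sketch: launch AWS infrastructure from a CloudFormation template with boto3.
# Stack name, template file, region, and parameters are hypothetical placeholders.
import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")

with open("hdp-migration-vpc.yaml") as f:           # assumed local template file
    template_body = f.read()

cfn.create_stack(
    StackName="hdp-migration-vpc",                   # assumed stack name
    TemplateBody=template_body,
    Parameters=[
        {"ParameterKey": "VpcCidr", "ParameterValue": "10.0.0.0/16"},  # assumed parameter
    ],
    Capabilities=["CAPABILITY_NAMED_IAM"],
)

# Block until the stack finishes creating (raises if it rolls back).
cfn.get_waiter("stack_create_complete").wait(StackName="hdp-migration-vpc")
print("stack created")
```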

Cloud Big Data Platform Architect @ Barrick Gold Corporation
- Managed, led, and guided the team to deliver the requirements.
- Understood and translated customer requirements into technical requirements.
From March 2018 to January 2019 (11 months), California.

Hadoop Security Architect @ Hortonworks (Client: Daimler Trucks North America)
- Configured and validated security on existing clusters; implemented all aspects of Hadoop security: authentication, authorization, audit, data protection, and server hardening.
- Hardened ACLs (network and local Linux), integrated Kerberos with AD, and configured LDAP, SSL certificates, and TLS.
- Designed Hadoop cluster architecture with a focus on optimization and tuning while ensuring solid security design principles.
- Designed and architected big data as a service using various technologies on the cloud (AWS, Azure); Hadoop security configuration and administration.
- Migrated on-premises virtual machines to an Azure Resource Manager subscription with Azure Site Recovery; backed up and recovered virtual machines from a Recovery Services vault using Azure PowerShell and the portal.
- Performed security patching on Azure IaaS VMs through the Shavlik patching tool.
- Acted as the primary SME on Azure services including DRaaS, SaaS, PaaS, and IaaS while contributing architecture decisions and tasks for ongoing migration efforts; managed the Azure infrastructure for customers according to their requirements.
- Worked on Microsoft Azure Storage: storage accounts, blob storage, managed and unmanaged storage.
- Designed the data model and created the schema on SQL Azure.
- Worked with Azure microservices, Azure Functions, and Azure solutions; connected HDInsight to Azure for big data workloads.
- Worked on Service Fabric and Azure Container Service; designed and deployed architecture for Azure Service Fabric.
- Worked with big data on Azure: Data Lake Store and Data Factory.
From February 2018 to April 2018 (3 months), Portland, Oregon Area.

Sr DevOps / Big Data Architect - Internet of Things (IoT) @ Equinix
* Built a multi-node Apache Kafka cluster and implemented Kafka Manager to monitor multiple clusters.
* Built a multi-node Apache Storm cluster.
* Built Apache Cassandra and DataStax Enterprise clusters; enabled and configured Solr.
* Built a multi-node Apache Spark cluster; worked on replacing Storm with Spark Streaming.
* Installed Exhibitor for ZooKeeper monitoring, and Burrow.
* Performance-tuned the Kafka and Storm clusters; benchmarked real-time streams.
* In-depth knowledge of the architecture, read/write paths, cluster management, and cluster upgrades.
* Automation using Puppet.
* Real-time production monitoring and issue resolution.
* PoC and evaluation of Blue Prism automation.
* Monitored server and application performance using tools such as Hubble, Splunk, Epic & NmSys.
* Set up and managed multi-node Hadoop clusters on Azure.
From April 2016 to February 2017 (11 months), San Francisco Bay Area.
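
As a small illustration of working against the Cassandra/DataStax clusters mentioned above, the sketch below uses the DataStax Python driver to run a basic connectivity check; the contact point is a hypothetical placeholder.

```python
# Sketch: connect to a Cassandra cluster and run a health-check query
# with the DataStax Python driver. The contact point is a hypothetical placeholder.
from cassandra.cluster import Cluster

cluster = Cluster(["cassandra-node1.example.internal"], port=9042)  # assumed contact point
session = cluster.connect()

# system.local is always present; a quick way to confirm connectivity and version.
row = session.execute("SELECT cluster_name, release_version FROM system.local").one()
print(f"connected to {row.cluster_name}, Cassandra {row.release_version}")

cluster.shutdown()
```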

Big Data Administrator / Systems Architect @ Hortonworks (Client: T-Mobile North America)
* Provided hands-on subject matter expertise to build and implement Hadoop-based big data solutions.
* Researched, evaluated, architected, and deployed new tools, frameworks, and patterns to build sustainable big data platforms for clients.
* Designed and implemented complex, highly scalable statistical models and solutions that comply with security requirements.
* Identified gaps and opportunities for the improvement of existing client solutions.
* Interacted with, collaborated with, and guided clients, including at the executive level.
* Defined and developed APIs for integration with various data sources in the enterprise.
* Actively collaborated with other architects and developers in developing client solutions.
From November 2014 to April 2015 (6 months), Greater Seattle Area.

BigData/Hadoop Administrator @ State Compensation Insurance Fund
Primary focus on implementing security for the project. The cluster can be accessed in several ways (at the host level and through the web application level), so the work centered on authorization and authentication; also implemented data-at-rest and data-in-motion encryption, providing security and encryption at all levels.
* End-to-end cluster implementations for development, QA, and production.
* Teamed up with the cloud, security, release, and database architects to design the company's big data cluster architecture, integrating security across the components of the Hadoop cluster and complying with company security policies.
* Used Gazzang for data-at-rest encryption; implemented the zTrustee server and zNcrypt, and enabled process-based encryption.
* Designed user access authorization using SSSD integrated with Active Directory.
* Integrated Kerberos on all clusters with the company's LDAP/Active Directory, and created user groups and permissions for authorized access into the cluster.
* Created junction files in Tivoli Access Manager; integrated Hue/Cloudera Manager with SAML.
* Collaborated with and guided different teams for successful deployments into the production cluster.
* Analyzed system failures, identified root causes, and recommended courses of action.
* Performance-tested the production cluster using TeraGen, TeraSort, and TeraValidate; used TestDFSIO to validate read/write throughput and compare execution times for different file and block sizes.
* Performance-tested memory resource usage and tuned the cluster on a regular basis depending on load.
* Used MySQL to hold all of the cluster's databases and implemented high availability for the database.
From June 2014 to November 2014 (6 months), San Francisco Bay Area.
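
To illustrate the Active Directory-backed authorization described above, here is a minimal ldap3 sketch that looks up a user's AD group memberships; the server address, bind account, base DN, and username are hypothetical placeholders.

```python
# Sketch: look up a user's Active Directory group memberships with ldap3,
# the kind of lookup that backs group-based authorization on a secured cluster.
# Server address, bind account, base DN, and username are hypothetical placeholders.
from ldap3 import Server, Connection, ALL

server = Server("ldaps://ad.example.internal", get_info=ALL)        # assumed AD endpoint
conn = Connection(server, user="EXAMPLE\\svc-hadoop", password="***", auto_bind=True)

conn.search(
    search_base="DC=example,DC=internal",                           # assumed base DN
    search_filter="(sAMAccountName=jdoe)",                          # assumed user
    attributes=["memberOf"],
)

for entry in conn.entries:
    for group_dn in entry.memberOf:
        print(group_dn)

conn.unbind()
```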

Senior Hadoop Consultant @ Cloudwick
* Monitored workload, job performance, and capacity planning using Cloudera Manager.
* Analyzed system failures, identified root causes, and recommended courses of action.
* Exported data from HDFS into relational databases with Sqoop.
* Parsed, cleansed, and mined useful data in HDFS using MapReduce for further analysis.
* Fine-tuned Hive jobs for optimized performance.
* Partitioned and queried the data in Hive for further analysis by the BI team.
* Extended the functionality of Hive and Pig with custom UDFs and UDAFs.
* Extracted data from various sources into HDFS for processing.
* Wrote Pig scripts for advanced analytics on the data for recommendations.
* Used Sqoop effectively to transfer data between databases and HDFS.
* Streamed data into HDFS from web servers using Flume; implemented custom Flume interceptors to filter data and defined channel selectors to multiplex data into different sinks.
* Developed MapReduce programs to cleanse data in HDFS obtained from heterogeneous sources and make it suitable for ingestion into the Hive schema for analysis.
* Implemented complex MapReduce programs to perform map-side joins using the distributed cache.
* Designed and implemented custom writables, input formats, partitioners, and comparators.
* Used Hive to analyze the unified historic data in HDFS to identify issues and behavioral patterns; created internal or external Hive tables with appropriate static and dynamic partitions for efficiency.
* Implemented UDFs, UDAFs, and UDTFs in Java for Hive to process data that could not be handled with Hive's built-in functions.
* Used the RegEx, JSON, and Avro SerDes packaged with Hive for serialization and deserialization of streamed log data, and implemented custom Hive UDFs.
* Designed and implemented Pig UDFs for evaluating, filtering, loading, and storing data.
From November 2013 to June 2014 (8 months), San Francisco Bay Area.

Hadoop Engineer @ Cloudwick
* Responsible for building a cluster storing 380 TB of transactional data with an inflow of 10 GB of data every day.
* Performed various configurations including networking and iptables, hostname resolution, user accounts and file permissions, HTTP, FTP, and passwordless SSH login.
* Implemented authentication using the Kerberos protocol.
* Benchmarked the Hadoop cluster using different benchmarking mechanisms.
* Tuned the cluster by commissioning and decommissioning DataNodes.
* Performed a minor upgrade from CDH3u4 to CDH3u6 and a major upgrade from CDH3 to CDH4.
* Deployed high availability on the Hadoop cluster with quorum journal nodes; implemented automatic failover with ZooKeeper and the ZooKeeper failover controller.
* Configured Ganglia, installing the gmond and gmetad daemons to collect metrics from the distributed cluster and present them in real-time dynamic web pages to aid debugging and maintenance.
* Deployed a network file system for NameNode metadata backup.
* Performed cluster backups using DistCp, Cloudera Manager BDR, and parallel ingestion.
* Designed and allocated HDFS quotas for multiple groups.
* Configured and deployed the Hive metastore using MySQL and the Thrift server; used the Hive schema to create relations in Pig using HCatalog.
* Developed Pig scripts for handling the raw data for analysis.
* Deployed a Sqoop server to perform imports from heterogeneous data sources to HDFS.
* Configured Flume agents to stream log events into HDFS for analysis.
* Configured Oozie for workflow automation and coordination.
* Wrote custom monitoring scripts for Nagios to monitor the daemons and cluster status, and custom shell scripts to automate redundant tasks on the cluster.
From May 2012 to October 2013 (1 year 6 months), San Francisco Bay Area.

System Administrator @ j B Group of Educational Institutions
While a learner at the institution, was also responsible for a large set of activities; a few are listed below.
* Involved in requirements gathering and in designing and developing applications.
* Prepared UML diagrams for the project use cases.
* Worked with Java string manipulation to parse CSV data for applications, and with Java database connections to read and write data from Java applications.
* Developed static and dynamic web pages using JSP, HTML, and CSS; worked on JavaScript for client-side data validation.
* Structured wikis and forums for product documentation; researched, set up, and designed MediaWiki, phpBB, and Joomla content management systems.
* Incorporated an LDAP service and single sign-on for the CMS web portal; maintained the customer support portal.
* Installed CentOS on multiple servers using PXE boot and Kickstart for remote installation of Linux.
* Handled day-to-day user access and permissions; installed and maintained Linux servers; monitored system activity, performance, and resource utilization.
* Maintained RAID groups and LUN assignments per agreed design documents.
* Performed system administration tasks such as cron jobs, package installation, and patching; extensive use of LVM, creating volume groups and logical volumes; performed RPM and YUM package installations and other server management.
* Performed scheduled backups and necessary restorations; scheduled backup jobs with cron during non-business hours.
* Configured DNS for hostname-to-IP resolution.
* Troubleshot and fixed issues at the user, system, and network level using various tools and utilities.
From 2007 to 2011 (4 years), Hyderabad Area, India.

Senior Big Data Platform Architect @ Bank of the West
The project is a versatile approach involving petabytes of data to manage the bank's enterprise data. It combines a descriptive and contextual customer view based on internal data.
It uses the new data in acquisition, engagement, and attrition analyses to provide insights that improve conversion and engagement.
- Introduced and implemented new technologies in the bank's technology stack, adding value and footprint.
- Critical in suggesting and implementing end-to-end big data platform solutions.
- Led a global big data operations team.
- Trained the team in understanding and implementing various big data technologies: the Hadoop stack, Apache Druid, Apache Superset, the ELK stack, Postgres, Apache Airflow, Hortonworks Data Platform, Hortonworks Data Framework, Docker and containerization, Denodo, and PCI compliance.

Sr. Hadoop / Big Data Architect @ San Diego Supercomputer Center
* Applied advanced systems/infrastructure concepts to define, design, and implement highly complex Hadoop systems, services, and technology solutions.
* Responsible for the implementation and ongoing administration of Hadoop and Cassandra infrastructure.
* Built Apache Cassandra and DataStax Enterprise clusters; enabled and configured Solr.
* Maintained the Hadoop and Cassandra clusters on a 24/7 basis.
* Worked with data delivery teams to set up new Hadoop users and handle cluster maintenance.
* Screened Hadoop cluster job performance and capacity planning; monitored Hadoop cluster connectivity and security; managed and reviewed Hadoop log files.
* HDFS support and maintenance; diligently teamed with the infrastructure, network, database, application, and business intelligence teams to guarantee high data quality and availability.
* Led a team of systems/infrastructure professionals.
* Helped the customer set up Talend pipelines and ETL pipelines.

System and Application Security
* Maintained complex security systems.
* Interpreted and adopted campus, medical center, Office of the President, system, and regulation-based security policies to control access to networked resources.
* Provided recommendations and requirements on network access controls.
* Coordinated with Linux support, developers, and principal data owners to maintain a HIPAA-compliant Hadoop environment through access controls such as data encryption at rest and in transit, HDFS access control via Sentry, Kerberos, and SSSD, and access control to other Hadoop applications such as Cloudera Manager, Cloudera Navigator, and Hue via LDAP; responsible for guaranteeing users receive least-privileged access.
* ELT ingest activities (using Hive, Sqoop, and other tools), HDFS activities (using HDFS and DistCp), and development activities (using Java and other tools).
From March 2017 to January 2018 (11 months), Greater San Diego Area.

Sr Hadoop Administrator @ GE Digital
* Managed multiple clusters using Pivotal HD; managed and operated a petabyte-scale cluster.
* Automated the installation of Hadoop using Chef, Puppet, and ICM-client.
* Performance tuning of HDFS, YARN, and Talend.
* Managed and monitored the cluster using Nagios and check_mk; managed the PCC command center.
* Wrote utilities for the big data platform; code review, defining standards and best practices, operations, regular maintenance, and software upgrades.
* Worked assigned ServiceNow tickets through to resolution with the client.
* Responsible for managing support cases daily, including triage, isolating and diagnosing the problem, ensuring issues are reproducible, and subsequent resolution of the issue.
* Coordinated between different teams for release management, configuration management, and change management.
* Responsible for building the knowledge base to prevent recurring escalations for previously resolved cases.
* Continuous improvement of incident handle times, first-contact resolution, escalation rates, and the self-service/community experience.
* Helped customers set up the ecosystem for their use cases and debugged ecosystem issues (HDFS, MapReduce, YARN, Talend, Kerberos, Sqoop, GemFire XD, HAWQ, Tableau, Hive, HBase, Oozie, Flume, Hue, Pig, Drill).
* Built a multi-node Apache Kafka cluster and implemented Kafka Manager to monitor multiple clusters.
* Built a multi-node Storm cluster and DataStax Enterprise clusters.
* Built a multi-node Apache Spark cluster; worked on replacing Storm with Spark Streaming.
* High availability of all applications on the production cluster and 24x7 technical support; participated in a 24x7 on-call rotation.
* Downtime management; participated in and provided feedback for capacity planning; ensured support SLAs were met.
* Defined processes around change management, release management, and application transition across the Hadoop platform; process definitions and ensuring best practices/guidelines are met.
From May 2015 to April 2016 (1 year), San Ramon, California.


Saran Reddy's Personal Email Address, Business Email, and Phone Number are curated by ContactOut on this page.

Frequently Asked Questions about Saran Reddy

What company does Saran Reddy work for?

Saran Reddy works for Williams-Sonoma, Inc.


What is Saran Reddy's role at Williams-Sonoma, Inc.?

Saran Reddy is a Kafka Solutions Consultant at Williams-Sonoma, Inc.


What is Saran Reddy's personal email address?

Saran Reddy's personal email address is s****[email protected]


What is Saran Reddy's business email address?

Saran Reddy's business email addresses are not available


What is Saran Reddy's Phone Number?

Saran Reddy's phone number is (**) *** *** 309


What industry does Saran Reddy work in?

Saran Reddy works in the Information Technology and Services industry.


Who are Saran Reddy's colleagues?

Saran Reddy's colleagues are Laurence Luz, Dongjoon Hyun, Italo Cocio, David Lee, Liwanshi Raheja, Sourygna Luangsay, Alexander Combs, Jim Davis, Andy Christianson, and John Healy


10x your recruitment & sales conversations

Contact over 200M professionals instantly by email or phone. Reveal personal & work email addresses, as well as phone numbers, accurately with our ContactOut Chrome extension.

In a nutshell

Saran Reddy's Personality Type

Extraversion (E), Intuition (N), Feeling (F), Judging (J)

Average Tenure

1 year(s), 0 month(s)

Saran Reddy's Willingness to Change Jobs


Open to opportunity?

There's a 92% chance that Saran Reddy is seeking new opportunities.

Saran Reddy's Social Media Links

/company/b... /school/uc...

Engage candidates 10x faster

Enjoy unlimited access and discover candidates outside of LinkedIn

One billion email addresses and counting

Everything you need to engage with more prospects.

2x More emails vs. competitors
99% Accuracy
40+ Integrations

ContactOut is used by

76% of Fortune 500 companies

Microsoft, Nestle, PwC, Merck, Rackspace
Try ContactOut for free today
  • 50 contacts/month
  • Works on standard LinkedIn only
  • Work emails, personal emails, mobile numbers
  • 1 user per company limit
Try ContactOut for Free