Cover letters are mundane. Record a video resume to show us what makes you tick and show us yourself in action. Or fill out the form below and we will put on our reading glasses.
Senior Big Data Architect
If you are a Big Data Architect who hasn’t yet found a Data Architect job in Bangalore that matches your skills, you have come to the right place.
Pattem has immediate openings for a Big Data Architect. As the Big Data Architect here, you will be responsible for building and expanding our data pipeline architecture, and for optimizing data flow and collection for cross-functional teams. The ideal Big Data Architect or Senior Big Data Architect is self-directed and comfortable supporting the data needs of multiple teams, systems, and products.
We are looking for candidates with 6 to 10 years of experience; the Senior Big Data Architect will also be responsible for research and development.
Big Data Architect Job Description
The Big Data Architect will have a deep understanding of Java, Hadoop / Spark / Storm, and machine learning approaches.
You will be an outstanding communicator, creative thinker, and data nerd.
Will be responsible for planning and designing next-generation big-data system architectures.
Will be responsible for managing the development and deployment of Hadoop applications.
Diligently team with the infrastructure, network, database, application, and business intelligence teams to guarantee high data quality and availability.
Performance tuning of Hadoop clusters and Hadoop MapReduce routines
Enable high-speed querying
Build scalable and highly functional web services to track data
Run real-time data analytics and batch processes
Give professional and technical advice on Big Data concepts and technologies, particularly highlighting the business potential through real-time analysis.
Translate complex technical and functional needs into detailed design
An attitude that ensures safe and secure operations.
A security-first approach to everything you do
Should possess good analytical and interpersonal communication skills.
Able to write and communicate effectively.
Motivated to work in a startup environment.
What do you need to apply?
Hold a Bachelor’s degree in computer science, mathematics, physics, or a similar field
6+ years of overall IT experience.
Most of this experience should be in Java (Core Java, J2EE, Java collections, Java multithreading)
4+ years of experience in Big Data (MapReduce, Hive, Pig, Sqoop, Flume, Kafka, Impala, Hadoop, Spark SQL, Spark Streaming, HBase)
Good aptitude in concurrency concepts and multi-threading
Knowledge of workflow schedulers like Oozie
Familiarity with data-loading tools like Sqoop, Flume, etc.
Hands-on experience in HiveQL
The capacity to write Pig Latin scripts
Performing cluster coordination services via Zookeeper
Excellent working knowledge of Apache Spark / Storm / Flink.
Hadoop performance tuning in a cloud environment
Working knowledge of Microsoft Azure or Amazon Web Services (AWS)
Implementing Kerberos security on the Hadoop cluster
Commissioning and Decommissioning Hadoop cluster Nodes
Good database knowledge and analytical thinking
Translate, load, and present disparate data sets in various formats and from various sources, such as JSON, text files, Kafka queues, and log data.
Knowledge of Python would be an additional plus.
Broad understanding of and experience in real-time analytics, NoSQL data stores, data modeling and management, and analytical tools, languages, or libraries (e.g., SAS, SPSS, R, Mahout). Strong evidence of a solution/product built from the ground up.
Key skills: real-time data analytics, MapReduce, Hive, Pig, Sqoop, Flume, Kafka, Impala, Hadoop, Spark, Python, NoSQL
What You Get
Excellent workplace and colleagues in the IT corridor of Bangalore
Competitive salary on par with the best in the industry
Immense exposure to new technologies
Notice Period & Location
Number of Positions – 7
Notice Period – 2 weeks
Location – Bangalore, Noida, Hyderabad
Drop us a line