XGraph is an exciting, venture-backed company working on disruptive and innovative technologies in sociology, online advertising, and distributed computing.
We are looking for an experienced software engineer who will be primarily responsible for:
* Assisting in the development of predictive algorithms,
* Translating and implementing these algorithms in Apache Hadoop, and
* Participating in the release and deployment process.
We are building extremely scalable, data-driven systems that capture and handle terabytes of data every day within an elastic infrastructure deployed in the cloud. As such, we offer the opportunity to work with many cutting-edge technologies. Our working environment is ideal for creative thinkers hungry to learn new technologies and tackle complex problems.
Formally, the position requirements are:
* Must have a Bachelor's degree in Computer Science, Engineering, or a related technical field. A Master's degree is a plus, as is a background in statistics or linear algebra.
* Must have expertise in at least one of: Hadoop, machine learning, data mining.
* Must be able to learn new languages and platforms extremely quickly in order to solve complex engineering problems that no one has solved before.
* Must have superior analytical skills and the ability to write efficient, scalable code.
Additional relevant experience would include:
* Statistical analysis, graph algorithms, information retrieval algorithms.
* Sharded databases and distributed file systems.
* Clustering and classification techniques.
* Knowledge of online ad networks and exchanges.
* Other technologies in our infrastructure: Amazon Web Services, MySQL, memcached, J2EE, Ruby, HBase.
This is a full-time, salaried position. We offer a competitive compensation package based on experience, including comprehensive benefits such as full health, dental, and vision coverage for all full-time employees. This position is based in our New York City office; working remotely is not an option.
Source: Joel On Software