
HackerRank Introduces New Benchmark To Assess Advanced AI Models
CUPERTINO, Calif., Feb. 11, 2025 (GLOBE NEWSWIRE) -- HackerRank, the Developer Skills Company, today introduced its new ASTRA Benchmark. ASTRA, which stands for Assessment of Software Tasks in Real-World Applications, is designed to evaluate how well advanced AI models, such as ChatGPT, Claude, and Gemini, perform tasks across the entire software development lifecycle.
The ASTRA Benchmark consists of multi-file, project-based problems designed to mimic real-world coding tasks. The intent of the HackerRank ASTRA Benchmark is to determine the correctness and consistency of an AI model's coding ability in relation to practical applications.
“With the ASTRA Benchmark, we're setting a new standard for evaluating AI models,” said Vivek Ravisankar, co-founder and CEO of HackerRank. “As software development becomes more human + AI, it's important that we have a very good understanding of the combined abilities. Our experience pioneering the market in assessing software development skills makes us uniquely qualified to assess the abilities of AI models acting as agents for software developers.”
A key highlight from the benchmark showed that OpenAI's o1 was the top performer, while Claude 3.5 Sonnet produced more consistent results.
Key features of the ASTRA Benchmark include:
- Diverse skill domains: The current version includes 65 project-based coding questions, primarily focused on front-end development. These questions are categorized into 10 primary coding skill domains and 34 subcategories.
- Multi-file project questions: To mimic real-world development, ASTRA's dataset includes an average of 12 source code and configuration files per question as model inputs, resulting in an average of 61 lines of solution code per question.
- Model correctness and consistency evaluation: To provide a more precise assessment, ASTRA prioritizes comprehensive metrics such as average score, average pass@1, and median standard deviation (a rough sketch of the pass@1 computation follows this list).
- Wide test case coverage: ASTRA's dataset contains an average of 6.7 test cases per question, designed to rigorously evaluate the correctness of implementations.
- Benchmark results: For a full report and analysis of the initial benchmark results, please visit .
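The release does not define how ASTRA computes these metrics, but pass@1 has a standard unbiased estimator from the code-generation evaluation literature (Chen et al., 2021): pass@k = 1 - C(n-c, k)/C(n, k), where n is the number of samples generated per question and c the number that pass all test cases. The sketch below shows how an average pass@1 could be computed from repeated model runs; the function name and the per-question sample counts are illustrative assumptions, not figures from the ASTRA report.

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator (Chen et al., 2021).
    n: total samples generated for a question,
    c: samples that pass all test cases,
    k: number of samples drawn."""
    if n - c < k:
        # Fewer than k failing samples: any draw of k must include a pass.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Hypothetical per-question results as (samples generated, samples passing).
results = [(10, 7), (10, 9), (10, 3)]

# Average pass@1 across questions; for k=1 the estimator reduces to c/n.
avg_pass_at_1 = sum(pass_at_k(n, c, 1) for n, c in results) / len(results)
print(f"average pass@1 = {avg_pass_at_1:.3f}")
```

Under this formulation, consistency across runs (e.g., the median standard deviation the release mentions) would be measured over the per-run scores that feed into these averages.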
Ravisankar added, “By open sourcing our ASTRA Benchmark, we're offering the AI community the opportunity to run their models against a high-quality, independent benchmark. This supports the continued advancement of AI while fostering more collaboration and transparency in the AI community to ensure the integrity of new models.”
For more information about HackerRank's ASTRA Benchmark, contact ... .
About HackerRank
HackerRank, the Developer Skills Company, leads the market with over 2,500 customers and a community of over 25 million developers. Having pioneered this space, HackerRank is trusted by companies to help them set up a skills strategy, showcase their brand to developers, implement a skills-based hiring process, and ultimately upskill and certify employees...all driven by AI. Learn more at .

