MapCoder: Multi-Agent Code Generation for Competitive Problem Solving

Code synthesis, which requires a deep understanding of complex natural language (NL) problem descriptions, generation of code instructions for complex algorithms and data structures, and the successful execution of comprehensive unit tests, presents a significant challenge. Thus, while large language models (LLMs) demonstrate impressive proficiency in natural language processing (NLP), their performance in code generation tasks remains limited. In this paper, we introduce a new approach to code generation that leverages multi-agent prompting and uniquely replicates the full cycle of program synthesis as observed in human developers. Our framework, MapCoder, consists of four LLM agents specifically designed to emulate the stages of this cycle: recalling relevant examples, planning, code generation, and debugging. After thorough experiments, with multiple LLMs, ablations, and analyses across eight challenging competitive problem-solving and program synthesis benchmarks, MapCoder showcases remarkable code generation capabilities, achieving new state-of-the-art (pass@1) results: HumanEval 93.9%, MBPP 83.1%, APPS 22.0%, CodeContests 28.5%, and xCodeEval 45.3%. Moreover, our method consistently delivers superior performance across various programming languages and varying problem difficulties. We open-source our framework at https://github.com/Md-Ashraful-Pramanik/MapCoder.

Md. Ashraful Islam¹, Mohammed Eunus Ali¹, Md Rizwan Parvez²
¹Bangladesh University of Engineering and Technology, ²Qatar Computing Research Institute (QCRI)
{mdashrafulpramanic, mohammed.eunus.ali}@gmail.com, [email protected]

1 Introduction

Computer programming has emerged as a ubiquitous problem-solving tool that brings tremendous benefits to every aspect of our lives (Li et al., 2022a; Parvez et al., 2018; Knuth, 1992). To maximize programmers' productivity and enhance accessibility, automation in program synthesis is paramount. With the growth of LLMs, significant advancements have been made in program synthesis, driving us into an era where we can generate fully executable code requiring no human intervention (Chowdhery et al., 2022; Nijkamp et al., 2022).

Despite LLMs' initial success and the scaling up of model size and data, many of these models still struggle on complex problem-solving tasks, especially competitive programming problems (Austin et al., 2021). To bridge this gap, in this paper we introduce MapCoder: a Multi-Agent Prompting based Code generation approach that can seamlessly synthesize solutions for competition-level programming problems.

Competitive programming, or competition-level code generation, often regarded as the pinnacle of problem solving, is a challenging task. It requires a deep comprehension of NL problem descriptions, multi-step complex reasoning beyond mere memorization, excellence in algorithms and data structures, and the capability to generate substantial code that produces the desired outputs aligned with comprehensive test cases (Khan et al., 2023).

Early approaches utilizing LLMs for code generation employ direct prompting, where LLMs generate code directly from problem descriptions and sample I/O (Chen et al., 2021a). Recent methods like chain-of-thought (Wei et al., 2022a) advocate modular or pseudocode-based generation to enhance planning and reduce errors, while retrieval-based approaches such as Parvez et al. (2021) leverage relevant problems and solutions to guide LLMs' code generation. However, the gains of such approaches remain limited in a task as complex as code generation, where the generated code often fails to pass the test cases and these methods do not feature a bug-fixing schema (Ridnik et al., 2024).

A promising solution to the above challenge is self-reflection (Shinn et al., 2023; Chen et al., 2022), which iteratively evaluates the generated code against test cases, reflects on mistakes, and modifies the code accordingly. However, such approaches have limitations too. Firstly, while previous studies indicate that superior problem-solving capabilities are attained when using in-context exemplars (Shum et al., 2023; Zhang et al., 2022; Wei et al., 2022a) or plans (Jiang et al., 2023b), these approaches, during both code generation and debugging, only leverage the problem description itself in a zero-shot manner. Consequently, their gains can be limited.

To confront the above challenge, we develop MapCoder, augmenting the generation procedure with auxiliary supervision. We draw inspiration from human programmers and how they use various signals and feedback while programming. The human problem-solving cycle involves recalling past solutions, planning, code writing, and debugging. MapCoder imitates these steps using four LLM agents: retrieval, planning, coding, and debugging. Instead of relying on human-annotated examples or external code retrieval models, we empower our retrieval agent to autonomously retrieve relevant problems itself (Yasunaga et al., 2023). Moreover, we design a novel structured pipeline schema that intelligently cascades the LLM agents and incorporates a dynamic iteration protocol to enhance the generation procedure at every step. Figure 1 shows an overview of our approach, MapCoder.

Additionally, existing iterative self-reflection methods rely on extra test cases generated by LLM agents (e.g., AgentCoder (Huang et al., 2023), LATS (Zhou et al., 2023), self-reflection (Shinn et al., 2023)) or external tools, compounding the challenges. Test case generation is as challenging as code generation itself (Pacheco et al., 2007), and incorrect test cases can lead to erroneous code. Blindly editing code based on these test cases can undermine problem-solving capabilities. For instance, while self-reflection boosts GPT-4's performance on the HumanEval dataset, it drops performance by 3% on the MBPP dataset (Shinn et al., 2023). To validate this, we replace GPT-4 with ChatGPT on the HumanEval dataset itself and observe that model performance drops by 26.3%. Therefore, our debugging agent performs unit tests and bug fixing using only the sample I/O, without any additional artifacts, which is more plausible for real-world widespread adoption.

We evaluate MapCoder on eight popular program synthesis benchmarks, including basic programming benchmarks like HumanEval and MBPP as well as challenging competitive problem-solving benchmarks like APPS, CodeContests, and xCodeEval. With multiple different LLMs, including ChatGPT, GPT-4, and Gemini Pro, our approach significantly enhances their problem-solving capabilities, consistently achieving new SOTA performance and outperforming strong baselines like Reflexion (Shinn et al., 2023) and AlphaCodium (Ridnik et al., 2024). Moreover, our method consistently delivers superior performance across various programming languages and varying problem difficulties. Furthermore, with detailed ablation studies, we analyze MapCoder to provide more insights.

2 Related Work

Program Synthesis: Program synthesis has a long-standing history in AI systems (Manna and Waldinger, 1971). A large body of prior research attempted to address it via search or data-flow approaches (Li et al., 2022a; Parisotto and Salakhutdinov, 2017; Polozov and Gulwani, 2015; Gulwani, 2011). Before LLMs, LMs attempted to generate code by fine-tuning (i.e., training) neural language models (Wang et al., 2021; Ahmad et al., 2021; Feng et al., 2020; Parvez et al., 2018; Yin and Neubig, 2017; Hellendoorn and Devanbu, 2017; Rabinovich et al., 2017; Hindle et al., 2016), or by leveraging conversational intents or data-flow features (Andreas et al., 2020; Yu et al., 2019). Large Language Models: Various LLMs have been developed for code synthesis (Li et al., 2022b; Fried et al., 2022; Chen et al., 2021b; Austin et al., 2021; Nijkamp et al., 2022; Allal et al., 2023). Recent open-source LLMs include Llama-2 (Touvron et al., 2023), CodeLlama-2 (Roziere et al., 2023), Mistral (Jiang et al., 2023a), Deepseek Coder (Guo et al., 2024), and MoTCoder (Li et al., 2023), which are capable of solving many basic programming tasks.

Prompting LLMs: As indicated in Section 1, apart from direct code generation, LLM prompting approaches can be summarized into three categories: retrieval (Yasunaga et al., 2023; Parvez et al., 2023, 2021), planning (Wei et al., 2022b; Jiang et al., 2023b), and debugging (Ridnik et al., 2024; Chen et al., 2023, 2022; Le et al., 2022). In contrast, we combine all these paradigms and bridge their gaps (see Table 1). Among others, in different contexts of generic problem solving, Tree-of-Thoughts (Yao et al., 2023) and Cumulative Reasoning (Zhang et al., 2023) consider a tree-traversal approach to explore different sub-steps towards a solution, while our code generation approach mirrors the human programming cycle through various LLM agents. Notably, our traversal does not rely on sub-steps toward the solution but instead utilizes different forms of complete solutions.

3 MapCoder  

Our goal is to develop a multi-agent code generation approach for competitive problem solving. To this end, our framework, MapCoder, replicates the human programming cycle through four LLM agents: retrieval, planning, coding, and debugging. We devise a pipeline sequence for MapCoder, intelligently cascading the agents in a structured way and enhancing each agent's capability by augmenting it with in-context learning signals from the previous agents in the pipeline. However, not all agent responses and outputs are equally useful. Therefore, MapCoder additionally features an adaptive agent traversal schema that lets the corresponding agents interact dynamically, iteratively enhancing the generated code (for example, by fixing bugs) while maximizing the usage of the LLM agents. In this section, we first discuss the agents (in pipeline order), their prompts, and their interactions, followed by MapCoder's dynamic agent traversal protocol for competitive code generation.

3.1 Retrieval Agent

Our first agent, the Retrieval Agent, recalls past relevant problem-solving instances, akin to human memory. It finds k (user-defined) similar problems without manual crafting or external retrieval models. Instead, we leverage the LLM agent itself, instructing it to generate such problems. Our prompt extends the analogical prompting principles of Yasunaga et al. (2023), generating examples and their solutions simultaneously, along with additional metadata (e.g., problem description, code, and plan) to provide to the following agents as auxiliary data. We adopt a specific sequence of instructions, which is crucial for the prompt's effectiveness. In particular, we initially instruct the LLM to produce similar and distinct problems and their solutions, facilitating reverse-engineering of problem plans. Then, we prompt the LLM to generate solution code step by step, allowing post-processing to form the corresponding plan. Finally, we direct the LLM to generate relevant algorithms and provide instructional tutorials, enabling the agent to reflect on underlying algorithms and generate algorithmically similar examples.
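A minimal sketch of how this self-retrieval step might look in code is shown below; the call_llm helper and the exact instruction wording are illustrative assumptions, not the released implementation (the actual prompts are given in Appendix B).

```python
# Illustrative sketch of the Retrieval Agent (assumed helper, not MapCoder's released code).
# `call_llm` stands in for any chat-completion client and must be supplied by the user.

def retrieval_agent(problem: str, k: int, call_llm) -> str:
    """Ask the LLM itself to recall k relevant exemplars: similar problems, their
    step-by-step solution code (later post-processed into plans), and the
    underlying algorithm with a short tutorial."""
    prompt = (
        f"Given the problem below, first write {k} relevant and distinct problems "
        "and solve each of them step by step in code. Then name the underlying "
        "algorithm and write a short tutorial about it.\n\n"
        f"# Problem\n{problem}\n"
    )
    return call_llm(prompt)
```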

3.2 Planning Agent

The second agent, the Planning Agent, aims to create a step-by-step plan for the original problem. It uses the examples and their plans obtained from the Retrieval Agent to generate plans for the original problem. A straightforward approach would be to utilize all examples collectively to generate a single target plan. However, not all retrieved examples hold equal utility, and concatenating examples in a random order may compromise the LLM's ability to generate accurate plans. For instance, Xu et al. (2023) demonstrated that repeating more relevant information (e.g., the query) towards the end of the in-context input aids LLM reasoning more effectively than including relatively less relevant context. A similar conclusion about separating noisy in-context data can also be drawn from state-of-the-art retrieval-augmented generation approaches like Wang et al. (2023). Therefore, we generate a distinct target plan for each retrieved example. Additionally, multiple plans offer diverse pathways to success.

To provide the following agents with utility information for each plan, our prompt for the Planning Agent asks the LLM to generate both the plan and a confidence score. Figure 2 shows our prompt for this agent.
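A rough sketch of this per-exemplar planning step follows; the response format and its parsing are assumptions for illustration, not the exact schema used in Figure 2.

```python
import re

def planning_agent(problem: str, exemplar: str, call_llm) -> tuple[str, float]:
    """Generate one plan for the original problem from a single retrieved exemplar,
    together with a self-reported confidence score used later to order the plans."""
    prompt = (
        f"Here is a solved example problem:\n{exemplar}\n\n"
        "Using it as guidance, write a step-by-step plan to solve the new problem, "
        "then rate your confidence in that plan from 0 to 100.\n\n"
        f"# New problem\n{problem}\n\n"
        "Respond as:\nPLAN: <your plan>\nCONFIDENCE: <number>"
    )
    response = call_llm(prompt)
    plan = response.split("PLAN:", 1)[-1].split("CONFIDENCE:", 1)[0].strip()
    match = re.search(r"CONFIDENCE:\s*(\d+)", response)
    confidence = float(match.group(1)) if match else 0.0
    return plan, confidence
```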

3.3 Coding Agent

Next is the Coding Agent. It takes the problem description and a plan from the Planning Agent as input and translates that plan into code that solves the problem. During agent traversal, the Coding Agent takes the original problem and one particular plan from the Planning Agent, generates the code, and tests it on the sample I/O. If the initial code fails, the agent transfers it to the next agent for debugging; otherwise, it returns that code as the final solution.
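The sample-I/O check that decides whether a candidate passes or is handed to the Debugging Agent could look like the sketch below; run_code is an assumed stdin/stdout harness, and (as noted in the Limitations) generated code should be executed in a sandbox.

```python
import subprocess
import sys

def run_code(code: str, stdin: str, timeout: int = 5) -> str:
    """Run candidate Python code in a fresh interpreter process and return its stdout."""
    result = subprocess.run(
        [sys.executable, "-c", code],
        input=stdin, capture_output=True, text=True, timeout=timeout,
    )
    return result.stdout.strip()

def passes_samples(code: str, sample_io: list[tuple[str, str]]) -> bool:
    """Return True only if the code reproduces every sample output from its input."""
    try:
        return all(run_code(code, inp) == out.strip() for inp, out in sample_io)
    except Exception:  # a timeout or harness error counts as a failure
        return False
```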

3.4 Debugging Agent

Finally, the Debugging Agent utilizes the sample I/O from the problem description to rectify bugs in the generated code. Similar to humans cross-checking their plan while fixing bugs, our pipeline supplements the Debugging Agent with plans from the Planning Agent. This plan-derived debugging significantly enhances bug fixing in MapCoder, underscoring the pivotal roles played by both the Debugging Agent and the Planning Agent in the generation process. We verify this in Section 6. For each plan, this process is repeated t times. The prompt for this step is illustrated in Figure 3. Note that, unlike Reflexion (Shinn et al., 2023) and AlphaCodium (Ridnik et al., 2024), our Debugging Agent does not require any additional test case generation in the pipeline.
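A minimal sketch of this plan-guided debugging loop, repeated up to t times per plan, is shown below; the prompt wording is illustrative, and the actual prompt is given in Figure 3.

```python
def debugging_agent(problem, plan, code, sample_io, t, call_llm, passes_samples):
    """Iteratively repair the code using only the sample I/O, with the plan as guidance."""
    for _ in range(t):
        if passes_samples(code, sample_io):
            return code, True
        prompt = (
            f"# Problem\n{problem}\n\n# Plan\n{plan}\n\n# Current code\n{code}\n\n"
            "The code fails on the sample input/output. Explain the bug with respect "
            "to the plan and return a corrected version of the full program."
        )
        code = call_llm(prompt)
    return code, passes_samples(code, sample_io)
```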

3.5 Dynamic Agent Traversal

The dynamic traversal in MapCoder begins with the Planning Agent, which outputs the plans for the original problem along with confidence scores. These plans are sorted, and the highest-scoring one is sent to the Coding Agent. The Coding Agent translates the plan into code, which is tested against the sample I/Os. If all pass, the code is returned; otherwise, it is passed to the Debugging Agent, which attempts to rectify the code iteratively, up to t times. If successful, the code is returned; otherwise, responsibility shifts back to the Planning Agent for the next highest-confidence plan. This iterative process continues for k iterations, reflecting a programmer's approach. We summarize our agent traversal in Algorithm 1 in Appendix A; its complexity is O(kt). An example illustrating MapCoder's problem solving compared to the Direct, Chain-of-Thought, and Reflexion approaches is given in Figure 4. All detailed prompts for each agent are in Appendix B.
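Putting the pieces together, the traversal can be summarized by the sketch below, which mirrors Algorithm 1 using the hypothetical agent helpers sketched above (not the released implementation).

```python
def mapcoder(problem, sample_io, k, t, agents):
    """Dynamic agent traversal: try the k plans in decreasing confidence order,
    giving each plan's code up to t debugging rounds (roughly O(k*t) LLM calls)."""
    exemplars = agents.retrieve(problem, k)                  # Retrieval Agent
    plans = [agents.plan(problem, ex) for ex in exemplars]   # Planning Agent: (plan, confidence)
    plans.sort(key=lambda pc: pc[1], reverse=True)           # highest confidence first
    last_code = None
    for plan, _confidence in plans:
        code = agents.code(problem, plan)                    # Coding Agent
        code, ok = agents.debug(problem, plan, code, sample_io, t)  # Debugging Agent
        last_code = code
        if ok:                                               # all sample I/O passed
            return code
    return last_code                                         # fall back to the last attempt
```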

4 Experimental Setup

4.1 Datasets

For extensive evaluation, we use eight benchmark datasets: five from basic programming and three from the complex competitive programming domain. The five basic programming datasets are HumanEval (Chen et al., 2021a), HumanEval-ET (Dong et al., 2023a), EvalPlus (Liu et al., 2023), MBPP (Austin et al., 2021), and MBPP-ET (Dong et al., 2023a). HumanEval-ET and EvalPlus extend HumanEval, and MBPP-ET extends MBPP, by incorporating more test cases. The problem set sizes of HumanEval and MBPP (and their extensions) are 164 and 397, respectively. Due to the absence of sample I/O in MBPP and MBPP-ET, we randomly remove one test case from MBPP-ET for each problem and provide it as the sample I/O for that problem. Importantly, this removed test case is carefully selected to ensure mutual exclusivity from the hidden test sets in MBPP and MBPP-ET. The three competitive programming datasets are Automated Programming Progress Standard (APPS), xCodeEval (Khan et al., 2023), and CodeContests, where we use 150, 106, and 156 problems, respectively, in our experiments.
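This selection of a sample I/O for MBPP could be sketched roughly as follows; the field names (test_list, et_test_list) follow the public MBPP/MBPP-ET format but are assumptions here rather than the paper's exact preprocessing code.

```python
import random

def pick_sample_io(problem: dict) -> tuple[str, list[str]]:
    """Pick one MBPP-ET test case to act as the sample I/O, ensuring it does not
    also appear in the hidden MBPP test set, and keep the rest as hidden tests."""
    hidden_mbpp = set(problem["test_list"])          # hidden tests of the MBPP problem
    candidates = [t for t in problem["et_test_list"] if t not in hidden_mbpp]
    sample = random.choice(candidates)
    remaining_et = [t for t in problem["et_test_list"] if t != sample]
    return sample, remaining_et
```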

4.2 Baselines

We compare MapCoder with several baselines and state-of-the-art approaches. Direct Prompting instructs language models to generate code without explicit guidance, relying on the inherent capabilities of the LLM. Chain-of-Thought Prompting (CoT) (Wei et al., 2022b) breaks down problems into step-by-step solutions, enabling effective tackling of complex tasks. Self-Planning Prompting (Jiang et al., 2023b) divides the code generation task into planning and implementation phases. Analogical Reasoning Prompting (Yasunaga et al., 2023) instructs models to recall relevant problems from their training data. Reflexion (Shinn et al., 2023) provides verbal feedback to improve solutions based on unit test results. Self-collaboration (Dong et al., 2023b) proposes a framework where different LLMs act as analyst, coder, and tester to cooperatively generate code for complex tasks, achieving better performance than a single LLM used directly. AlphaCodium (Ridnik et al., 2024) iteratively refines code based on AI-generated input-output tests.

4.3 Foundation Models, Evaluation Metric, k, and t

With k = t = 5 on HumanEval and k = t = 3 for the others, we evaluate all datasets using ChatGPT (gpt-3.5-turbo-1106) and GPT-4 (gpt-4-1106-preview) from OpenAI, and Gemini Pro from Google. We also evaluate our method using an open-source LLM, Mistral-7B-Instruct. We use the Pass@k evaluation metric, where the model is considered successful if at least one of the k generated solutions is correct.
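Under this simple definition (a problem counts as solved if any of the k generated solutions is correct), the benchmark-level Pass@k could be computed as in the sketch below.

```python
def pass_at_k(results: list[list[bool]]) -> float:
    """results[i][j] is True if the j-th of the k solutions generated for problem i
    passes all hidden tests; a problem counts as solved if any attempt passes."""
    solved = sum(1 for attempts in results if any(attempts))
    return solved / len(results)

# Example: three problems, two attempts each -> Pass@2 = 2/3
print(pass_at_k([[False, True], [True, False], [False, False]]))
```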

5 Results

In this section, we evaluate the code generation capabilities of our framework, MapCoder, for competitive problem solving. Our experimental results are reported in Table 2. Overall, MapCoder shows tremendous excellence in code generation, significantly outperforming all baselines and achieving new state-of-the-art results on all benchmarks. In general, the scores obtained with GPT-4 are higher than those with ChatGPT.

5.1 Performance on basic code generation

The highest Pass@1 scores are observed on simple program synthesis tasks like HumanEval and MBPP in Table 2. Although the current state-of-the-art method, Reflexion (Shinn et al., 2023), performs reasonably well on the simpler (non-contest) datasets such as HumanEval and HumanEval-ET, it does not generalize across datasets depicting a wide variety of problems. Self-reflection techniques enhance GPT-4's performance on HumanEval but result in a 3% decrease on the MBPP dataset. Similarly, with ChatGPT, there is a notable 26.3% drop in performance, since in several cases the AI-generated test cases are incorrect. We observe that 8% of failures on HumanEval and 15% on MBPP are caused by incorrect AI-generated test cases, whereas our approach is independent of AI-generated test cases and consistently improves code generation in general. Consequently, even on HumanEval with GPT-4, our Pass@1 surpasses Reflexion by ~3%. On top of that, across all four simple programming datasets, MapCoder enhances Direct prompting significantly, with a maximum improvement of 88% on HumanEval-ET with ChatGPT.

5.2 Performance on competitive problem solving

The significance of MapCoder shines through clearly when evaluated in competitive problem-solving contexts. Across datasets such as APPS, xCodeEval, and CodeContests, MapCoder demonstrates substantial enhancements over Direct prompting, with improvements of 41.3%, 52.6%, and 132.8% for ChatGPT, and 73.7%, 41.2%, and 135.1% for GPT-4, respectively. Notably, the most challenging datasets are APPS and CodeContests, where MapCoder's performance stands out prominently. We deliberately compare against strong baselines on these datasets, regardless of whether they are prompt-based or not. Importantly, on CodeContests our Pass@1 results match the Pass@5 scores of the concurrent state-of-the-art model AlphaCodium (Ridnik et al., 2024): 28.5% vs. their 29% (see Table 3). Furthermore, our Pass@5 results demonstrate an additional improvement of 12.8%. On APPS, MapCoder consistently surpasses the Pass@1 scores of all baseline prompts for both ChatGPT and GPT-4.

5.3 Performance with Varying Difficulty Levels

The APPS dataset comprises problems categorized into three difficulty levels: (i) Introductory, (ii) Interview, and (iii) Competition. Figure 6 illustrates the performance of various competitive approaches on these three categories. The results reveal that MapCoder excels across all problem categories, with the highest gains in competitive problem solving, indicating superior code generation capabilities in general and remarkable effectiveness on competitive problems in particular. To better understand which algorithmic problems it can solve and at what difficulty level, we also compare MapCoder against the Direct approach with respect to the difficulty levels (in xCodeEval, an integer where a higher value means a more difficult problem) and tags (the algorithm types that can be used to solve a problem, e.g., greedy, dp, brute-force, constructive) present in the xCodeEval dataset. The results of this comparison are depicted in Figure 5. This comparison shows that MapCoder is effective across various algorithm types and exhibits superior performance even at higher difficulty levels compared to the Direct approach. However, beyond mid-level difficulties (>1000), its gains are still limited.

5.4 Performance Across Different LLMs

To show the robustness of MapCoder across various LLMs, we evaluate it using Gemini Pro, a SoTA LLM from a different family, in Table 4. We also evaluate MapCoder using an open-source LLM, Mistral-7B-Instruct, in Table 5. As expected, our method shows performance gains over the baseline approaches, with similar trends on both simple (HumanEval) and contest-level (CodeContests) problems.

5.5 Performance Across Different Programming Languages

Furthermore, we evaluate model performance with MapCoder across different programming languages, using the xCodeEval dataset, which features multiple languages. Figure 7 shows that MapCoder achieves consistent proficiency across different programming languages relative to the baselines.

6 Ablation Studies and Analyses

We present the ablation study of MapCoder on the HumanEval dataset, as its problems are simpler and easier for humans to diagnose.

6.1 Impact of Different Agents

We conduct a study excluding certain agents from MapCoder, which helps us investigate each agent's impact on the whole pipeline. As expected, the results (Table 6) show that every agent has a role in the pipeline, as turning off any agent decreases the performance of MapCoder. Furthermore, we observe that the Debugging Agent has the most significant impact, as evidenced by a performance drop of 17.5% when excluding this agent alone and an average performance drop of 24.83% across all cases. The Planning Agent is the second most important, with an average drop of 16.7% across all cases.

6.2 Qualitative Example

To verify the above numerical significance and to understand how our method enhances code generation, we perform a qualitative analysis to find the underlying reason for the superior performance of MapCoder over other competitive prompting approaches. An example problem and the outputs, with explanations, of Direct, CoT, Reflexion, and MapCoder prompting are shown in Figure 4. This example demonstrates how the Debugging Agent fixes the bugs by leveraging the plan from the Planning Agent as a guide, verifying the impact of these two most significant agents. We present more detailed examples in the Appendix.

6.3 Impact of k and t

MapCoder involves two hyper-parameters: the number of self-retrieved exemplars, k, and the number of debugging attempts, t. Our findings (Table 7) reveal that higher k and t yield proportionate performance gains at the expense of time.

6.4 Impact of Number of Sample I/Os

Given the limited number of sample I/Os in the HumanEval dataset (2.82 per problem on average), we supplemented it with an additional 5 sample I/Os per problem from the HumanEval-ET dataset. Experiments with this augmented set showed a 1.5% performance gain.

6.5 Error Analysis and Challenges

Although MapCoder demonstrates strong performance compared to other methods, it faces challenges in certain algorithmic domains. For example, Figure 5 illustrates MapCoder's reduced performance on more difficult problems requiring precise problem understanding and concrete planning, capabilities still lacking in LLMs. In the xCodeEval dataset (see Figure 5), it solves a limited number of problems in categories like Combinatorics, Constructive, Number Theory, Divide and Conquer, and Dynamic Programming (DP). Manual inspection of five DP-category problems reveals occasional misinterpretation of problems, attempts to solve them using greedy or brute-force approaches, and struggles with accurate DP table construction even when the need for a DP solution is recognized.

7 Conclusion and Future Work

In this paper, we introduce MapCoder, a novel framework for effective code generation in complex problem-solving tasks, leveraging the multi-agent prompting capabilities of LLMs. MapCoder captures the complete problem-solving cycle by employing four agents (retrieval, planning, coding, and debugging) which dynamically interact to produce high-quality outputs. Evaluation across major benchmarks, including basic and competitive programming datasets, demonstrates MapCoder's consistent outperformance of well-established baselines and SoTA approaches across various metrics. Future work aims to extend this approach to other domains like question answering and mathematical reasoning, expanding its scope and impact.

8 Limitations

Among the limitations of our work, firstly, MapCoder generates a large number of tokens, which may pose challenges in resource-constrained environments. Table 8 shows the average number of API calls and token consumption with the default k and t (i.e., corresponding to the reported performance), while Table 7 shows how k and t can be adjusted to trade performance gain against time and token cost. We have not addressed the problem of minimizing tokens and API calls in this paper and leave it for future work. Secondly, our method currently relies on sample input-output (I/O) pairs for bug fixing. Although sample I/Os provide valuable signals for LLMs' code generation, their limited number may not always capture the full spectrum of possible test cases. Consequently, enhancing the quality of additional test case generation could reduce our reliance on sample I/Os and further improve the robustness of our approach. Additionally, future exploration of open-source code generation models, such as CodeLLaMa, LLaMa3, or Mixtral 8x7B, could offer valuable insights and potential enhancements to our approach. Finally, when running machine-generated code, it is advisable to execute it inside a sandbox to avoid potential risks.
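For the last point, a minimal precaution is to execute generated programs in a separate interpreter process with a hard timeout, as in the sketch below; this limits runaway loops but is not a full security sandbox (a production setup would also restrict filesystem and network access).

```python
import os
import subprocess
import sys
import tempfile

def run_generated_code(code: str, stdin: str = "", timeout: int = 5) -> str:
    """Run untrusted, machine-generated Python code in an isolated subprocess
    with a hard timeout, and return whatever it printed to stdout."""
    with tempfile.TemporaryDirectory() as tmp:
        path = os.path.join(tmp, "candidate.py")
        with open(path, "w") as f:
            f.write(code)
        proc = subprocess.run(
            [sys.executable, "-I", path],  # -I: isolated mode (ignores user site-packages)
            input=stdin, capture_output=True, text=True,
            timeout=timeout, cwd=tmp,
        )
        return proc.stdout
```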

References

  • Ahmad et al. (2021) Wasi Uddin Ahmad, Saikat Chakraborty, Baishakhi Ray, and Kai-Wei Chang. 2021. Unified pre-training for program understanding and generation. arXiv preprint arXiv:2103.06333.
  • Allal et al. (2023) Loubna Ben Allal, Raymond Li, Denis Kocetkov, Chenghao Mou, Christopher Akiki, Carlos Munoz Ferrandis, Niklas Muennighoff, Mayank Mishra, Alex Gu, Manan Dey, et al. 2023. Santacoder: don’t reach for the stars! arXiv preprint arXiv:2301.03988 .
  • Andreas et al. (2020) Jacob Andreas, John Bufe, David Burkett, Charles Chen, Josh Clausman, Jean Crawford, Kate Crim, Jordan DeLoach, Leah Dorner, Jason Eisner, Hao Fang, Alan Guo, David Hall, Kristin Hayes, Kellie Hill, Diana Ho, Wendy Iwaszuk, Smriti Jha, Dan Klein, Jayant Krishnamurthy, Theo Lanman, Percy Liang, Christopher H. Lin, Ilya Lintsbakh, Andy McGovern, Aleksandr Nisnevich, Adam Pauls, Dmitrij Petters, Brent Read, Dan Roth, Subhro Roy, Jesse Rusak, Beth Short, Div Slomin, Ben Snyder, Stephon Striplin, Yu Su, Zachary Tellman, Sam Thomson, Andrei Vorobev, Izabela Witoszko, Jason Wolfe, Abby Wray, Yuchen Zhang, and Alexander Zotov. 2020. Task-oriented dialogue as dataflow synthesis . Transactions of the Association for Computational Linguistics , 8:556–571.
  • Austin et al. (2021) Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, David Dohan, Ellen Jiang, Carrie Cai, Michael Terry, Quoc Le, et al. 2021. Program synthesis with large language models. arXiv preprint arXiv:2108.07732 .
  • Chen et al. (2022) Bei Chen, Fengji Zhang, Anh Nguyen, Daoguang Zan, Zeqi Lin, Jian-Guang Lou, and Weizhu Chen. 2022. Codet: Code generation with generated tests. arXiv preprint arXiv:2207.10397 .
  • Chen et al. (2021a) Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan, Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mohammad Bavarian, Clemens Winter, Philippe Tillet, Felipe Petroski Such, Dave Cummings, Matthias Plappert, Fotios Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, William Hebgen Guss, Alex Nichol, Alex Paino, Nikolas Tezak, Jie Tang, Igor Babuschkin, Suchir Balaji, Shantanu Jain, William Saunders, Christopher Hesse, Andrew N. Carr, Jan Leike, Josh Achiam, Vedant Misra, Evan Morikawa, Alec Radford, Matthew Knight, Miles Brundage, Mira Murati, Katie Mayer, Peter Welinder, Bob McGrew, Dario Amodei, Sam McCandlish, Ilya Sutskever, and Wojciech Zaremba. 2021a. Evaluating large language models trained on code .
  • Chen et al. (2021b) Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. 2021b. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374 .
  • Chen et al. (2023) Xinyun Chen, Maxwell Lin, Nathanael Schärli, and Denny Zhou. 2023. Teaching large language models to self-debug. arXiv preprint arXiv:2304.05128 .
  • Chowdhery et al. (2022) Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. 2022. Palm: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311 .
  • Dong et al. (2023a) Yihong Dong, Jiazheng Ding, Xue Jiang, Zhuo Li, Ge Li, and Zhi Jin. 2023a. Codescore: Evaluating code generation by learning code execution. arXiv preprint arXiv:2301.09043 .
  • Dong et al. (2023b) Yihong Dong, Xue Jiang, Zhi Jin, and Ge Li. 2023b. Self-collaboration code generation via chatgpt .
  • Feng et al. (2020) Zhangyin Feng, Daya Guo, Duyu Tang, Nan Duan, Xiaocheng Feng, Ming Gong, Linjun Shou, Bing Qin, Ting Liu, Daxin Jiang, et al. 2020. Codebert: A pre-trained model for programming and natural languages. In Findings of the Association for Computational Linguistics: EMNLP 2020 , pages 1536–1547.
  • Fried et al. (2022) Daniel Fried, Armen Aghajanyan, Jessy Lin, Sida Wang, Eric Wallace, Freda Shi, Ruiqi Zhong, Wen-tau Yih, Luke Zettlemoyer, and Mike Lewis. 2022. Incoder: A generative model for code infilling and synthesis. arXiv preprint arXiv:2204.05999 .
  • Gulwani (2011) Sumit Gulwani. 2011. Automating string processing in spreadsheets using input-output examples. ACM Sigplan Notices , 46(1):317–330.
  • Guo et al. (2024) Daya Guo, Qihao Zhu, Dejian Yang, Zhenda Xie, Kai Dong, Wentao Zhang, Guanting Chen, Xiao Bi, Y Wu, YK Li, et al. 2024. Deepseek-coder: When the large language model meets programming–the rise of code intelligence. arXiv preprint arXiv:2401.14196 .
  • Hellendoorn and Devanbu (2017) Vincent J. Hellendoorn and Premkumar Devanbu. 2017. Are deep neural networks the best choice for modeling source code? In Proceedings of the 2017 11th Joint Meeting on Foundations of Software Engineering , ESEC/FSE 2017, pages 763–773, New York, NY, USA. ACM.
  • Hindle et al. (2016) Abram Hindle, Earl T. Barr, Mark Gabel, Zhendong Su, and Premkumar Devanbu. 2016. On the naturalness of software . Commun. ACM , 59(5):122–131.
  • Huang et al. (2023) Dong Huang, Qingwen Bu, Jie M Zhang, Michael Luck, and Heming Cui. 2023. Agentcoder: Multi-agent-based code generation with iterative testing and optimisation. arXiv preprint arXiv:2312.13010 .
  • Jiang et al. (2023a) Albert Q. Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, Lélio Renard Lavaud, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, and William El Sayed. 2023a. Mistral 7b .
  • Jiang et al. (2023b) Xue Jiang, Yihong Dong, Lecheng Wang, Qiwei Shang, and Ge Li. 2023b. Self-planning code generation with large language model. arXiv preprint arXiv:2303.06689 .
  • Khan et al. (2023) Mohammad Abdullah Matin Khan, M Saiful Bari, Xuan Long Do, Weishi Wang, Md Rizwan Parvez, and Shafiq Joty. 2023. xcodeeval: A large scale multilingual multitask benchmark for code understanding, generation, translation and retrieval. arXiv preprint arXiv:2303.03004 .
  • Knuth (1992) Donald E Knuth. 1992. Literate programming. CSLI Lecture Notes, Stanford, CA: Center for the Study of Language and Information (CSLI), 1992 .
  • Le et al. (2022) Hung Le, Yue Wang, Akhilesh Deepak Gotmare, Silvio Savarese, and Steven Chu Hong Hoi. 2022. Coderl: Mastering code generation through pretrained models and deep reinforcement learning. Advances in Neural Information Processing Systems , 35:21314–21328.
  • Li et al. (2023) Jingyao Li, Pengguang Chen, and Jiaya Jia. 2023. Motcoder: Elevating large language models with modular of thought for challenging programming tasks. arXiv preprint arXiv:2312.15960 .
  • Li et al. (2022a) Yujia Li, David Choi, Junyoung Chung, Nate Kushman, Julian Schrittwieser, Rémi Leblond, Tom Eccles, James Keeling, Felix Gimeno, Agustin Dal Lago, et al. 2022a. Competition-level code generation with alphacode. Science , 378(6624):1092–1097.
  • Li et al. (2022b) Yujia Li, David Choi, Junyoung Chung, Nate Kushman, Julian Schrittwieser, Rémi Leblond, Tom Eccles, James Keeling, Felix Gimeno, Agustin Dal Lago, Thomas Hubert, Peter Choy, Cyprien de Masson d’Autume, Igor Babuschkin, Xinyun Chen, Po-Sen Huang, Johannes Welbl, Sven Gowal, Alexey Cherepanov, James Molloy, Daniel Mankowitz, Esme Sutherland Robson, Pushmeet Kohli, Nando de Freitas, Koray Kavukcuoglu, and Oriol Vinyals. 2022b. Competition-level code generation with alphacode.
  • Liu et al. (2023) Jiawei Liu, Chunqiu Steven Xia, Yuyao Wang, and Lingming Zhang. 2023. Is your code generated by chatGPT really correct? rigorous evaluation of large language models for code generation . In Thirty-seventh Conference on Neural Information Processing Systems .
  • Manna and Waldinger (1971) Zohar Manna and Richard J. Waldinger. 1971. Toward automatic program synthesis . Commun. ACM , 14(3):151–165.
  • Nijkamp et al. (2022) Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, and Caiming Xiong. 2022. Codegen: An open large language model for code with multi-turn program synthesis. arXiv preprint arXiv:2203.13474 .
  • Pacheco et al. (2007) Carlos Pacheco, Shuvendu K Lahiri, Michael D Ernst, and Thomas Ball. 2007. Feedback-directed random test generation. In 29th International Conference on Software Engineering (ICSE’07) , pages 75–84. IEEE.
  • Parisotto and Salakhutdinov (2017) Emilio Parisotto and Ruslan Salakhutdinov. 2017. Neural map: Structured memory for deep reinforcement learning. arXiv preprint arXiv:1702.08360 .
  • Parvez et al. (2021) Md Rizwan Parvez, Wasi Uddin Ahmad, Saikat Chakraborty, Baishakhi Ray, and Kai-Wei Chang. 2021. Retrieval augmented code generation and summarization. arXiv preprint arXiv:2108.11601 .
  • Parvez et al. (2018) Md Rizwan Parvez, Saikat Chakraborty, Baishakhi Ray, and Kai-Wei Chang. 2018. Building language models for text with named entities . In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) , pages 2373–2383, Melbourne, Australia. Association for Computational Linguistics.
  • Parvez et al. (2023) Md Rizwan Parvez, Jianfeng Chi, Wasi Uddin Ahmad, Yuan Tian, and Kai-Wei Chang. 2023. Retrieval enhanced data augmentation for question answering on privacy policies . In Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics , pages 201–210, Dubrovnik, Croatia. Association for Computational Linguistics.
  • Polozov and Gulwani (2015) Oleksandr Polozov and Sumit Gulwani. 2015. Flashmeta: A framework for inductive program synthesis. In Proceedings of the 2015 ACM SIGPLAN International Conference on Object-Oriented Programming, Systems, Languages, and Applications , pages 107–126.
  • Rabinovich et al. (2017) Maxim Rabinovich, Mitchell Stern, and Dan Klein. 2017. Abstract syntax networks for code generation and semantic parsing . CoRR , abs/1704.07535.
  • Ridnik et al. (2024) Tal Ridnik, Dedy Kredo, and Itamar Friedman. 2024. Code generation with alphacodium: From prompt engineering to flow engineering. arXiv preprint arXiv:2401.08500 .
  • Roziere et al. (2023) Baptiste Roziere, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Tal Remez, Jérémy Rapin, et al. 2023. Code llama: Open foundation models for code. arXiv preprint arXiv:2308.12950 .
  • Shinn et al. (2023) Noah Shinn, Federico Cassano, Ashwin Gopinath, Karthik R Narasimhan, and Shunyu Yao. 2023. Reflexion: Language agents with verbal reinforcement learning. In Thirty-seventh Conference on Neural Information Processing Systems .
  • Shum et al. (2023) Kashun Shum, Shizhe Diao, and Tong Zhang. 2023. Automatic prompt augmentation and selection with chain-of-thought from labeled data . In Findings of the Association for Computational Linguistics: EMNLP 2023 , pages 12113–12139, Singapore. Association for Computational Linguistics.
  • Touvron et al. (2023) Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. 2023. Llama 2: Open foundation and fine-tuned chat models . arXiv preprint arXiv:2307.09288 .
  • Wang et al. (2021) Yue Wang, Weishi Wang, Shafiq Joty, and Steven CH Hoi. 2021. Codet5: Identifier-aware unified pre-trained encoder-decoder models for code understanding and generation. In EMNLP , pages 8696–8708.
  • Wang et al. (2023) Zhiruo Wang, Jun Araki, Zhengbao Jiang, Md Rizwan Parvez, and Graham Neubig. 2023. Learning to filter context for retrieval-augmented generation. arXiv preprint arXiv:2311.08377 .
  • Wei et al. (2022a) Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. 2022a. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems , 35:24824–24837.
  • Wei et al. (2022b) Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. 2022b. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems , 35:24824–24837.
  • Xu et al. (2023) Xiaohan Xu, Chongyang Tao, Tao Shen, Can Xu, Hongbo Xu, Guodong Long, and Jian guang Lou. 2023. Re-reading improves reasoning in language models .
  • Yao et al. (2023) Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L Griffiths, Yuan Cao, and Karthik Narasimhan. 2023. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:2305.10601 .
  • Yasunaga et al. (2023) Michihiro Yasunaga, Xinyun Chen, Yujia Li, Panupong Pasupat, Jure Leskovec, Percy Liang, Ed H Chi, and Denny Zhou. 2023. Large language models as analogical reasoners. arXiv preprint arXiv:2310.01714 .
  • Yin and Neubig (2017) Pengcheng Yin and Graham Neubig. 2017. A syntactic neural model for general-purpose code generation . CoRR , abs/1704.01696.
  • Yu et al. (2019) Tao Yu, Rui Zhang, Heyang Er, Suyi Li, Eric Xue, Bo Pang, Xi Victoria Lin, Yi Chern Tan, Tianze Shi, Zihan Li, Youxuan Jiang, Michihiro Yasunaga, Sungrok Shim, Tao Chen, Alexander Fabbri, Zifan Li, Luyao Chen, Yuwen Zhang, Shreya Dixit, Vincent Zhang, Caiming Xiong, Richard Socher, Walter Lasecki, and Dragomir Radev. 2019. CoSQL: A conversational text-to-SQL challenge towards cross-domain natural language interfaces to databases . In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP) , pages 1962–1979, Hong Kong, China. Association for Computational Linguistics.
  • Zhang et al. (2023) Yifan Zhang, Jingqin Yang, Yang Yuan, and Andrew Chi-Chih Yao. 2023. Cumulative reasoning with large language models. arXiv preprint arXiv:2308.04371 .
  • Zhang et al. (2022) Zhuosheng Zhang, Aston Zhang, Mu Li, and Alex Smola. 2022. Automatic chain of thought prompting in large language models. arXiv preprint arXiv:2210.03493 .
  • Zhou et al. (2023) Andy Zhou, Kai Yan, Michal Shlapentokh-Rothman, Haohan Wang, and Yu-Xiong Wang. 2023. Language agent tree search unifies reasoning acting and planning in language models. arXiv preprint arXiv:2310.04406 .

Appendix A Algorithm of MapCoder

Algorithm 1 shows the pseudo-code of our prompting technique.

Appendix B Detailed Prompts of MapCoder

The detailed prompts of the Retrieval Agent, Planning Agent, Coding Agent, and Debugging Agent are shown in Figures 8, 9, and 10, respectively. Note that we adopt a specific sequence of instructions in the prompt for the Retrieval Agent, which is a crucial design choice.

Appendix C Example Problem

Two complete examples of how MapCoder works, showing all the prompts and responses for all four agents, are given in this link.
