
Electoral Tribunals and Democratic Consolidation in Nigeria: Interrogating the 2019 Post Election Litigations in Imo State

Tukura Tino

International Journal of Education and Social Science Research

The judiciary is one of the important institutions of government, the spice that makes the meal of democracy palatable. Over time, the judiciary in Nigeria has made strides in its effort to deepen and consolidate democracy, especially in the Fourth Republic, which began on 29th May 1999. However, recent litigation culminating in Supreme Court decisions has generated considerable argument over whether the judiciary can continue to be the hope of the common man. This paper therefore attempts to bridge this gap by examining the critical issues arising from the 2019 post-election litigations decided by the Supreme Court. The paper adopts the documentary method of data collection and utilizes secondary sources. Employing the theory of the post-colonial state, it argues that the limited relative autonomy and lack of independence of the judiciary, especially in the appointment of the CJN and other judicial officials, account for...

Related Papers

Journal of African Elections

David Enweremadu

FUDMA Journal of Politics and International Affairs (FUJOPIA)

Ndubuisi Uchechukwu

This study examines Election Petition Tribunals and democratisation in Nigeria. Nigeria's democracy has been characterised by continuous disagreement after elections, and the role of election petition tribunals in stabilising the polity and thus sustaining Nigeria's democracy cannot be neglected. The Election Petition Tribunals are part of Nigeria's judiciary and remain the first port of call in post-election matters. While adopting the qualitative descriptive method of data analysis, data were collected from secondary sources for the study. The Lockean theory of legitimacy was adopted as the theoretical guide for the paper. It was found that the implied distrust in Nigeria's electoral system and institutions has consistently raised questions about the credibility of the electoral system. It was also discovered that until the actors involved in the electoral process go through the courts, the validity and legality of the process cannot be determined. With the increasing number of cases bordering on Nigerian elections that are presented before the Election Petition Tribunals, the general will of the people becomes dependent on the actions and inactions of the judicial system. It is recommended, among other things, that the electoral management bodies and the Nigerian judiciary should build confidence among the Nigerian populace, thus strengthening Nigeria's democracy.

Dr. Usman Bappi

tyavwase aver

Hagler Okorie

California Linguistic Notes

Adesina B. Sunday

Segun Isaac Aderibigbe

Abdullahi Soliu Dagbo

Soliu Dagbo Abdullahi

Democracy is widely recognised as the best practice because governments emerge only through the explicit consent of the people expressed by their votes. A direct electoral link between the people and their elected representatives is the cornerstone that guarantees the legitimacy of a democratic government; once the link is broken, legitimacy erodes. Hence, the will of the people is at the heart of democracy. Accordingly, this paper identifies judicial activism as a viable tool to advance the will of the people on the stage of democracy in Nigeria. The paper critically examines the concept of judicial activism, its tolerability, and the necessity for Nigerian courts to deploy activism in election disputes. The research shows that although the courts have braced up to the challenges of the time and realised the need to exhibit judicial activism, more needs to be done with respect to election cases. It concludes that the judiciary must step up and assert its role as a non-partisan umpire to restore the electoral process to its democratic character.

Emma Etim, Augustine Akah

One of the major factors affecting democracy in Nigeria is the masses' loss of confidence in the judiciary and its rulings when there is an infringement on individual or collective rights and privileges. This research examines the influence of politics over law, specifically electorates' perceptions of the Akwa Ibom State election petition tribunal and court verdicts. The survey research design was utilised for the collection of factual data that are measurable and quantifiable. Data were collected using questionnaires as well as the aggregated observations of the researchers. The study also relied on systematic qualitative content analysis of secondary sources of data. Two hypotheses were generated and tested using Chi-square (X2) at the 0.05 level of significance. This paper argues that the judiciary's handling of the case shows the pursuit of equity and justice rather than deference to the whims and caprices of political juggernauts, and as such, the court decisions have been independent of political preferences. It is recommended that political parties and powerful individuals refrain from all illegal means of attaining public positions, which are the basis for election petitions. Also, if a rerun election is conducted, INEC officials are expected to be politically neutral, and security agencies must be on the lookout to ensure peace and order.

Joseph Ekong

In all liberal democracies across the globe today, electoral laws are enacted to guide the electoral procedures, processes and systems aimed at guaranteeing smooth transitions and the legitimisation of government through free, fair, credible and periodic elections. Consistent adherence to electoral laws helps in the consolidation of democracy. From 1999 to date, Nigeria's electoral laws have undergone a metamorphosis of reforms. The country's prevailing electoral law is the 2010 Electoral Act (as amended), which, in its preceding and present forms, was grossly violated by the various political actors during the 1999, 2003, 2007, 2011 and 2015 general elections. Therefore, this paper argues that the constant violation of the various sections of the electoral law in Nigeria has adversely affected the country's electoral process and the pace of democratic consolidation. To attain democratic consolidation in Nigeria, it is recommended that the extant electoral law be revamped in order to strengthen the capacity of the electoral umpire (the Independent National Electoral Commission, INEC) to effectively monitor and enforce compliance with the existing electoral laws by all stakeholders. In addition, this paper posits that the establishment of an independent electoral offences commission will help in the prompt sanctioning of electoral offenders, thereby deterring potential electoral deviants. Secondary sources of data are employed for this work. Keywords: Democracy, Democratic Consolidation, Electoral Laws

Why a rise in court cases is bad for democracy in Nigeria

Research Associate, University of Bristol

Disclosure statement

Ini Dele-Adedeji does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.


One year after Nigeria's 2019 general elections, the courts are still busy deciding the winners of dozens of contests.

One of the most recent cases was in Bayelsa State. The candidate of the All Progressives Congress was initially thought to have won the election. But he was sacked by the Supreme Court 24 hours before his swearing-in ceremony because, the court found, his running mate had presented fake documents and was therefore disqualified. You can't be a candidate without a qualified running mate.

There is also a case in Imo State. There, the candidate of the Peoples Democratic Party was sworn into office like others on 29 May 2019. But he was removed by the Supreme Court following a dispute over the electoral result. In its ruling, the court declared the candidate of the All Progressives Congress the winner. He'd come fourth at the polls.

Looking at the rate at which courts, rather than the electorate, end up determining actual winners of the polls, is the credibility of the Nigerian elections at stake?

I believe the answer is yes.

The power to determine who is elected into political office ought to rest with voters. Judicial recourse is perfectly allowed and is preferable to extra-judicial measures to redress perceived electoral slights. But this should be an exceptional option taken to rectify an electoral impropriety of some sort.

But it's not the exception in Nigeria. The Independent National Electoral Commission recently announced that it had so far withdrawn 64 certificates of return (documents issued to election winners) and reissued them to people declared winners by courts of law following the 2019 general elections. The election saw 1,031 candidates contest for presidential, governorship, national assembly and state houses of assembly seats.

The reality is that there’s merit to a large majority of the cases brought before the law courts seeking electoral redress. This is because electoral malpractice has become part of Nigeria’s electoral culture. These malpractices take place before, during and after elections. Some of the most common examples include multiple thumb-printing, falsification of result sheets, fake ballot papers, manipulation of voter registration and the use of violence to disrupt voting.

The history

There is precedent for the Nigerian courts acting as a last resort in electoral result disputes. Arguably the most monumental episode was the case between the late Obafemi Awolowo and the late Shehu Shagari following the 1979 presidential election.

Awolowo, a Nigerian nationalist and statesman who played a key role in the country's independence movement, was the presidential candidate of the Unity Party of Nigeria in that year's poll. Shagari was the presidential candidate of the National Party of Nigeria. Shagari won, emerging as Nigeria's first democratically elected president.

But Awolowo contested Shagari’s victory on the grounds that it had not satisfied the requirement in the electoral decree of the time that the winner had to secure one quarter of the votes cast in two thirds of all the states of the federation.

The election tribunal dismissed Awolowo’s claim and the case came before the Supreme Court. The judges also ruled in favour of Shagari except for the dissenting judgment of Justice Kayode Eso.

The current situation is different because of the rate at which election results are being annulled. This means that the courts are essentially determining the winners.

It unnecessarily places the courts and judges under the spotlight and the attendant pressure that comes with it, since it shifts the role of the judiciary from being an umpire to an arbiter.

Weaknesses in the system

Nigeria has a strong Electoral Act. It has been amended a few times over the years, and it is not markedly different from the electoral laws used in other democracies.

But the law can only go so far. The bigger problem is an absence of strong democratic institutions to support it. The strengthening of democratic institutions, I would argue, would result in an increase in free and fair elections.

In particular the electoral commission and the police force need to be strengthened. The police are usually left out during elections. Instead of being trained and given the wherewithal to assist electoral commission officials in safeguarding voters, electoral officials and ballot centres, the army is usually deployed during elections. This puts the police and army at cross purposes. It also increases the possibility of violence ensuing.

Another problem is the Independent National Electoral Commission. The root of a lot of the election-related cases brought before the courts can be traced to its limited ability to anticipate and address known recurring election-related problems. Examples include its inability to secure ballot boxes and tally votes in a timely fashion.

These things could be achieved if the commission were strengthened by the executive and given the statutory, logistical and financial support, and the independence, that it requires.

Who benefits?

Politicians – and those close to them – are the only ones to benefit from the current state of affairs. The Nigerian voting public will always come off worse. This is because voters are likely to become apathetic about voting if they feel that their vote doesn’t matter in the grand scheme of things. Low voter turnout is an indictment of the electoral process.

In addition, the argument over whether the courts are partial or impartial is a moot one. The fact remains that appointments to positions in almost every aspect of Nigeria's public sector are politically influenced. Nigerians are, therefore, right to question the partiality, or otherwise, of the courts.

The current trend also has the potential to embolden politicians to forego the polls and instead try to “win” elections by influencing the judiciary in underhand ways.

Making a habit of bypassing elections as the means of determining elected officials because of electoral irregularities, and forcing the judiciary constantly to annul elections, doesn't bode well for Nigeria's fledgling democracy.

Inside the courts and challenging election outcomes

Litigating election disputes is contentious, complex, and excessively technical. The technicality of electoral dispute litigation is fueled by the strict requirements of the Electoral Act, coupled with judicial attitudes over the years. The complex and technical nature of election petitions is largely responsible for the failure of election tribunals and courts to address the grievances of litigants despite efforts at resolving such election disputes.

Disclaimer: Opinions expressed in this commentary are those of the author and do not necessarily represent the institutional position of International IDEA, its Board of Advisers or its Council of Member States.

As expected, political attention is shifting to the courts as aggrieved candidates and political parties that contested Nigeria's 2023 general elections approach the courts to challenge the outcome of the polls and seek legal remedies. The polling unit was the arena of electoral competition a few weeks ago, but the courts have displaced the polling units as the new arena for electoral contests. As it stands, the courts will determine the final outcome in all election disputes they entertain, raising further concerns about the apparent excessive judicialization of the electoral process.

The process of registering a complaint or challenging the outcome of the election is called an election petition. Election tribunals or the courts address grievances with election results ventilated by litigants. Unlike other cases, election petitions are special cases in a class of their own. Due to their special nature, the procedures, courts, and timelines for filing documents are unique. Some technical defects or irregularities considered immaterial in other proceedings could be fatal to proceedings in election petitions. In addition, election petition tribunals and courts have adopted a strict constructionist approach which admits no extension to the timelines provided in the legal framework. Let’s consider five critical components of Nigeria’s election adjudication process.

Not all persons can be parties to an election dispute

Different categories of persons participate in elections, but not all possess the right to challenge or question the result of an election. Section 133 of the Electoral Act 2022 defines the persons entitled to present an election petition: a candidate in the election and a political party that participated in the election. On the other side, the person whose election is questioned is also a party to the petition. Where the complaint is against a permanent or ad hoc official of the Independent National Electoral Commission (INEC), INEC will be listed as a party due to its role in the administration of elections. Nigeria's electoral law considers these persons necessary parties in an election petition, and a petition will suffer an ill fate if they are excluded.

The person(s) or political party that initiates or files an election petition is referred to as the Petitioner, while the person or party the petition is made against is called the Respondent. In most cases, the Petitioner will seek to establish that the candidate INEC declared the winner was not validly elected, or that the Petitioner is entitled to be declared the winner. The Respondents will include the person or party declared the election winner. A tribunal or court will not entertain any petition that questions an election result or a winner declared by INEC unless the person announced as winner is joined as a party. This is logical, as the outcome of the petition affects the declared winner, and joinder affords the winner an opportunity to defend the victory in line with the long-established principle of fair hearing.

Special tribunals and courts resolve election disputes

A distinctive feature of election petitions lies in the courts and tribunals with judicial powers to resolve election disputes. Election petitions are determined by election tribunals or courts vested with the authority to hear and determine cases within their jurisdictional competencies. The Constitution of the Federal Republic of Nigeria, 1999 (as altered) (CFRN) and Electoral Act 2022, establish the following tribunals and courts to resolve election disputes:

  • National Assembly and State Houses of Assembly Election Tribunals for each state of the federation and the FCT with authority to entertain petitions on National Assembly and House of Assembly elections (Section 285(1) CFRN)
  • Governorship Election Tribunal to hear and determine petitions for governorship elections (Section 285(2) CFRN)
  • Court of Appeal to adjudicate petitions against presidential elections (Section 239(1) CFRN)
  • Area Council Election Tribunal to resolve disputes related to the elections into the office of the Chairman and Councilors within the FCT. (Section 131(1) Electoral Act 2022)

As a matter of law, election petition tribunals are constituted not later than 30 days before an election is held, and a tribunal is required to open its registry for business seven days before the election. These tribunals and courts can only resolve an election dispute if the law gives them the authority to do so; without that legal power, any proceeding they conduct will be an exercise in futility. An election tribunal or court must fulfil certain conditions before it assumes jurisdiction to resolve an election dispute. First, the tribunal or court must be properly constituted, with members of the panel duly qualified as prescribed by law. Secondly, the subject matter of the case must fall within the defined scope or powers of the tribunal or court. Lastly, due process must be followed in initiating the case, and all pre-conditions must have been satisfied.

All timeframes are sacrosanct

The Constitution and Electoral Act make explicit provisions on the timeframe within which an aggrieved person can institute a legal case challenging the result of an election. The law also provides a timeline for the courts to determine an election petition. The Court will only entertain an election petition if the petition is filed within the timeframe prescribed by the law. The Petitioner intending to challenge an election result must file their petition within 21 days after the declaration of the election results. Filing a petition outside the fixed period renders it incompetent and strips the Tribunal of the jurisdiction to hear and determine the petition.

An election tribunal has 180 days from the date of filing to hear and determine an election petition. Any petition determined outside that 180-day period is invalid. An appeal against the decision of an election tribunal is also constitutionally timebound: any person displeased with the decision of the National/State Assembly or Governorship election tribunal must file a notice of appeal in the registry of the Tribunal or Court within 21 days from the decision date. An appeal against the decision of the Tribunal must be disposed of by the appellate courts (Court of Appeal and Supreme Court) within 60 days from the date of the delivery of judgment by the Tribunal or Court. In addition, appeals from the decision of the Court of Appeal to the Supreme Court shall be filed within 14 days from the date the decision appealed against was delivered. It takes approximately eight months to resolve a dispute on National/State Assembly elections, ten months in the case of a governorship election petition, and eight months to determine a presidential election petition. No matter the exigencies or emergencies, the time fixed by the Constitution to hear and determine election cases cannot be extended. This is intended to cure the mischief of the past, where election petitions lasted for almost the term of office of the person whose election was questioned, thereby rendering the entire judicial process nearly fruitless.
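
The rough month figures above follow from simply adding the maximum statutory periods at each stage. The short sketch below is only an illustrative aid: the per-contest breakdown of stages and the 30-day month conversion are assumptions drawn from the timelines summarised in this commentary, not additional legal rules.

```python
# Maximum statutory periods (in days) at each stage, as summarised above.
# The per-contest breakdown is an illustrative assumption, not a quotation
# from the Constitution or the Electoral Act.
TIMELINES = {
    "National/State Assembly": [21, 180, 60],        # filing + tribunal + Court of Appeal (final)
    "Governorship":            [21, 180, 60, 60],    # filing + tribunal + Court of Appeal + Supreme Court
    "Presidential":            [21, 180, 60],        # filing + Court of Appeal (first instance) + Supreme Court
}

for contest, stages in TIMELINES.items():
    total = sum(stages)
    print(f"{contest}: {total} days (about {total / 30:.1f} months)")
```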

Grounds for challenging an election must be recognized by law

Any individual or political party that intends to challenge or question the result of an election must ensure the petition is established on a valid ground or reason recognized by law. An election petition can only succeed with valid grounds recognized by the 1999 Constitution or 2022 Electoral Act. Section 134 of the Electoral Act 2022 lays out three grounds. They include:

  • Non-qualification: An election can be questioned if the person declared winner was not qualified to contest the election at the time of the election. Where a candidate fails to meet the criteria enshrined in the Constitution, such a person is ineligible to contest. The requirements of citizenship, age (President 35 years; Senate and Governors 35 years; House of Representatives and State Assemblies 25 years), membership of and sponsorship by a political party, and educational qualification are the foundational criteria for running for office.
  • Corrupt practices and non-compliance: A petitioner must establish that the election was invalid by reason of corrupt practices or non-compliance with the provisions of the Electoral Act 2022. Corrupt practices include electoral offences such as election fraud, bribery, and falsification of election results. Non-compliance refers to outright violations of the Electoral Act 2022 and INEC guidelines which confer an undue advantage on a candidate or party. A petitioner should avoid lumping corrupt practices and non-compliance together under one ground, to avoid the attendant negative consequences.
  • Failure of the person declared winner to score a majority of lawful votes: Once the person initiating the petition can establish that the candidate declared winner was not duly elected by a majority of lawful votes cast at the election, the election will be nullified. This ground covers errors or computational inaccuracies in the collation of votes, the exclusion of votes cast for the petitioner, and the failure of the person declared winner to meet the legal requirement to be returned as winner.

Tribunal judgments are appealable

Litigants who are dissatisfied with rulings delivered by election tribunals or courts of first instance can appeal such judgments as a matter of constitutional right. Appeals arising from the decision of the Court of Appeal in respect of a presidential election shall be heard by the Supreme Court, which is the Court of last resort. In contrast, appeals against the decision of a Governorship election tribunal lie to the Court of Appeal and from the Court of Appeal to the Supreme Court, which is the final arbiter. Lastly, appeals on National and State Assembly election tribunal judgments are filed at the Court of Appeal, the final Court for all appeals related to legislative elections.

While the recourse to an unelected body of judges to resolve election disputes signals increasing faith in the judicial process, it also exposes the desperation of politicians to exploit the litigation process to clinch electoral victory. Without good judges, the aspiration of advancing electoral justice and political legitimacy may be thwarted. Charles Evans Hughes, in his presidential address to the American Bar Association, described a good judge thus:

“An honest, high-minded, able and fearless judge is, therefore, the most servant of democracy, for he illuminates justice as he interprets and applies the law; as he makes clear the benefits and shortcomings of the standards of individual and community rights amongst a free people."

This calls for courage on the part of the judiciary to assert itself as a fundamental pillar of democracy, insulate itself from the influence of politicians and uphold the rule of law to the highest standards in the interest of democracy. By so doing, the judiciary would be the true last hope of the common man.

May good judges rise when it matters most to enforce the will of the people expressed through the ballot box.

Samson Itodo is an election, democracy, and public policy enthusiast. Itodo serves as the Executive Director of Yiaga Africa. He is also a Board member of the Kofi Annan Foundation and the Board of Advisers of International IDEA. Please send comments and feedback to  [email protected] . He tweets @DSamsonItodo.


Imo State Governorship Election Petitions: Revisiting and Reversing the Supreme Court's Judgement and the Time Limits for Filing and Conclusion of Election Petitions – By Aidomokhai Cyril Longe

by Bridget Edokwe · February 1, 2020

Introduction:

The judgement of the Supreme Court in the Imo State governorship election petition between Emeka Ihedioha and Hope Uzodinma has received wide criticism from the public, and from members of the People's Democratic Party (PDP) in particular, to the extent of leading to protests. Many lawyers, including PDP sympathizers, have advised the PDP and its candidate in the 2019 Imo State governorship election, Emeka Ihedioha, to approach the Supreme Court to review and reverse its just-concluded judgement, which upturned the result declared by INEC that had returned Emeka Ihedioha as winner of the 2019 governorship election in Imo State.

The emphasis of this discourse would be on the power of the Supreme Court to review and reverse its decision given in the same case, on an election petition.

In the case of Hope Uzodinma versus Emeka Ihedioha, the Supreme Court's judgement was widely condemned as a miscarriage of justice, in a situation that saw the Court declaring results greater than the total number of registered and accredited voters in the Imo State governorship election. It is on this footing that many people are calling for a review of the judgement, in view of the timelines provided for the filing and disposal of election petitions by the 1999 Constitution of Nigeria (as amended) and the extant Electoral Act 2010 (as amended).

The Power Of The Supreme Court To Review And Reverse Its Judgement Given In The Same Case.

The Supreme Court is inherently empowered to set aside its judgment given in the same case when the judgment is obtained by fraud or deceit either on the Court or by one or more of the parties.

The judgment can be impeached or set aside by means of an action, which may be brought without leave, or where the judgment is a nullity. A person affected by an order of the Court which can properly be described as a nullity is entitled ex debito justitiae to have it set aside. The same applies where it is obvious that the Court was misled into giving the judgment under a mistaken belief that the parties had consented to it.

For instance, the case of Johnson v Lawanson (1971) 7 NSCC 82 is regarded as the trail-blazing case in which the Supreme Court exercised the power to overrule itself. Coker J.S.C. delivering the Court’s judgment held that “when the Court is faced with the alternative of perpetuating what it is satisfied to be an erroneous decision which was reached per incuriam and will, if followed, inflict hardship and injustice upon the generations in the future or of causing temporary disturbances of rights acquired under such a decision, I do not think we shall hesitate to declare the law as we find it.”

Again, in Olorunfemi v Asho, the Supreme Court set aside its judgment delivered on January 8, 1999, on the ground that it failed to consider the respondents' cross-appeal before allowing the appellant's appeal. The Court then ordered that the appeal be reheard de novo by another panel of Justices of the Supreme Court.

The Power Of The Supreme Court To Overrule Itself On Prior Decisions

The Supreme Court in exercising its powers has developed a large body of judicial decisions, or “precedents,” interpreting the Constitution. How the Court uses precedent to decide controversial issues has prompted debate over whether the Court should follow rules identified in prior decisions or overrule them. The Court’s treatment of precedent implicates longstanding questions about how the Court can maintain stability in the law by adhering to precedent under the doctrine of stare decisis while correcting decisions that rest on faulty reasoning, unworkable standards, abandoned legal doctrines, or outdated factual assumptions.

Thus, the Supreme Court may revisit its judgement under Order 8 Rule 16, Supreme Court Rules to correct clerical errors or omissions or gaps to give meaning to the judgement of the Court.

We are final not because we are infallible, rather we are infallible because we are final – Late Justice Oputa.

The above quote by the late Justice Chukwudifu Oputa speaks volumes about the power of the Supreme Court to overrule itself on a prior judgement.

A case in point is the case of ADEGOKE MOTORS LTD V ADESANYA (1989) 13 NWLR (Pt. 109)

The Power Of The Supreme Court To Reverse Itself In The Same Case In Election Petitions.

Election petitions are "sui generis", meaning that they are specific, have a life of their own, have a special character and are regulated by their own procedures.

The constitutional provision limiting the time within which election petitions, and appeals therefrom, must be filed and concluded has remained a double-edged sword in the way and manner election petitions are handled in Nigerian courts.

On the one hand, it is a salutary reform that cured the mischief of prolonged election petitions procedures that often enabled the beneficiaries of ‘stolen’ electoral mandate to hold political offices for several years before final judgment is secured, nullifying their elections and sacking them from the offices they fraudulently occupied.

On the other hand, the limitation of time has prejudiced numerous meritorious election petitions, which were unfortunately struck out for falling foul of the time frame.

Section 285(5)–(8) of the Constitution of the Federal Republic of Nigeria, 1999 (as amended) provides as follows:
a. An election petition shall be filed within 21 days after the date of the declaration of the results of the election;
b. An election tribunal shall deliver its judgment in writing within 180 days from the date of the filing of the petition;
c. An appeal from a decision of an election tribunal or court shall be heard and disposed of within 60 days from the date of the delivery of the judgment of the tribunal;
d. The Court, in all appeals from election tribunals, may adopt the practice of first giving its decision and reserving the reasons for the decision to a later date.

The Supreme Court has leaned towards a very strict interpretation of the above Constitutional provisions brooking no discretion whatsoever on the part of the Court to extend any of the time limits under any circumstance.

Thus, in the case of ANPP v Goni, the Supreme Court, per Rhodes-Vivour, JSC, left no one in doubt about its attitude to the constitutional timelines for election petitions:

The period of 180 days is not limited to trials at first instance but extends also to trials de novo that may be ordered by an appeal court. Once an election petition is not concluded within 180 days from the date the petition was filed by the petitioner, an election tribunal no longer has jurisdiction to hear the petition, and this applies to rehearings. The period of 180 days shall at all times be calculated from the date the petition was filed.

Still in the above case, the Supreme Court, per Onnoghen, JSC further opined and so held that: Courts do not have the vires to extend the time assigned by the Constitution. The time cannot be extended or expanded or elongated, or in any way enlarged. The time fixed by the Constitution is like the “Rock of Gibraltar” or Mount Zion, which cannot be moved. If what is to be done is not done within the time so fixed, it lapses as the Court is thereby robbed of the jurisdiction to continue to entertain the matter.

It was the same cerebral Onnoghen, JSC, who in the case of Felix Amadi & Anor. V INEC & Ors also foreclosed any hope of judicial magnanimity for enlargement of time in election petition appeals. He categorically pronounced that the time limit of 60 days for election petition appeals as provided in Section 285 (7) of the Constitution was sacrosanct. He reasoned that the obvious intendment of the Legislature in making that provision was to limit time and not to extend it. According to the Jurist, it would, therefore, be inappropriate and indeed illegal to interpret the provision to attain the effect of extending the time therein allotted.

The Supreme Court has shown great reluctance, and indeed an outright refusal, to act as an appeal court over itself, despite being inundated with applications to review decisions it has given in the same case.

A good example is the case of Dr Andy Uba, who had earlier gone to the Supreme Court to ask for the revalidation of his alleged victory at the April 14, 2007 governorship election and for his return to office. After the Supreme Court threw out that case, he approached a seven-man panel of the court to have it set aside the judgment which had terminated his two-week tenure as Governor of Anambra State in 2007.

The Court in its ruling delivered by the then Chief Justice of Nigeria, Justice Idris Kutigi, observed that Dr Uba’s attempt at luring the court into setting aside its judgment which was delivered on June 14, 2007, was a gross abuse of the Court process and maintained that there must be an end to litigation.

This was again reaffirmed in the case of Prof. Steve Torkuma Ugba vs. Gabriel Torwua Suswam, where the issue for determination was whether, given the facts of the case, the applicants satisfied the conditions to warrant the Supreme Court to set aside its earlier ruling.

The advantage of this stance is that it fosters stability, enhances the development of a consistent and coherent body of law, preserves continuity and manifests respect for the past, assures equality of treatment for litigants similarly situated, spares judges the task of re-examining rules of law or principles with each succeeding case and, finally, affords the law a desirable measure of predictability.

Based on the foregoing, on the issue of the Supreme Court acting as an appeal court over its own judgement in the Imo State governorship election petition between Hope Uzodinma and Emeka Ihedioha, it is right to note that what is to be disposed of has, in a way, hit the rock and can go nowhere else.

It is instructive to note that the Court of Appeal delivered its judgement on the Imo State governorship election petitions on the 19th of November 2019, while the Supreme Court, in turn, disposed of the appeals on the 14th of January 2020.

Going by a computation of time, the timeline for filing and concluding an election petition has elapsed.
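
That computation can be illustrated with the dates given in this article (Court of Appeal judgment on 19 November 2019, Supreme Court judgment on 14 January 2020) and the 60-day limit in Section 285(7) of the Constitution. The snippet below is only an explanatory sketch of the date arithmetic, not legal authority.

```python
from datetime import date, timedelta

# Dates stated in this article.
court_of_appeal_judgment = date(2019, 11, 19)
supreme_court_judgment = date(2020, 1, 14)

# Section 285(7): the appeal must be heard and disposed of within 60 days
# of the delivery of the judgment appealed against.
deadline = court_of_appeal_judgment + timedelta(days=60)

print(deadline)                            # 2020-01-18
print(supreme_court_judgment <= deadline)  # True: the appeal was disposed of in time
# Once that window closed, the constitutionally fixed period lapsed and the
# courts have no power to extend it, which is the point made above.
```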

It is therefore in order to conclude that the 2019 Imo State governorship election petition has reached its final judicial junction.

This is supported by the decision of the Supreme Court in the election petition case between the PDP and GOVERNOR OKOROCHA & 10 OTHERS in Suit No. SC/17/2012, delivered on 2nd March 2012, where Odili, JSC, held in a most profound manner and with characteristic candour as follows:

“What is to be disposed of has in a way hit the rock and can go nowhere else. That is to say, the legal dispute or process has reached its final destination and is at grand finale”.

The advice to the PDP and its candidate to approach the Supreme Court to review its judgement can therefore be said to be belated and contrary to the Constitution of Nigeria.

Aidomokhai Cyril Longe is a law graduate. He writes from Lagos.


Tribunal Judgment: An Analysis Of Election Appeals Procedure In Nigeria

By Festus Ogun, Esq

The Presidential Election Tribunal sitting in Abuja delivered its judgment today, the 6th of September, 2023. Specifically, the Court of Appeal finally determined the petitions separately filed by the Allied Peoples Movement (APM) and the Labour Party (LP) against Asiwaju Bola Ahmed Tinubu and the All Progressives Congress (APC), the party declared by the Independent National Electoral Commission (INEC) as the winner of the 2023 Presidential Election.

Similarly, other Election Tribunals in respect of National and State Houses of Assembly and Governorship elections are already delivering final judgments in petitions before them. Therefore, this article shall briefly discuss the right of appeal of those whose Petitions were dismissed (“the appellants”) and salient procedural rules that must be religiously complied with.

RIGHT OF APPEAL

The law is trite that a party affected by a judgment or who has a benefit to derive from the subject matter of a case may appeal any decision of a court or tribunal. Thus, a party to an Election Petition, inclusive of INEC in deserving cases, has the inherent right to lodge an appeal against the decision of the Election Tribunal. See:  INEC v NYAKO (2011) 12 NWLR (Pt. 1262) 439, 539.  For Presidential election petition appeals, the Supreme Court has the final say. For Governorship election appeal, it travels from the Tribunal through the Court of Appeal to the Supreme Court. However, in respect of National and State Houses of Assembly, the right to appeal terminates at the Court of Appeal.

The originating process (first court process to be filed) in an appellate proceeding is the “Notice of Appeal” which primarily contains the grounds of appeal and the particulars in support. It is also very instructive to note that leave is not required before the filing of a notice of appeal in election petition appeals.

TIMEFRAME FOR FILING OF APPEAL

Generally, by virtue of Section 285 (7) of the 1999 Constitution of the Federal Republic of Nigeria, an appeal from the decision of the Election Tribunal or Court of Appeal shall be heard and determined within 60 days from the date of the delivery of judgment at the tribunal or court of appeal.

Section 6 of the Election Judicial Proceedings Practice Directions, 2022 (the Practice Direction issued by the Court of Appeal) provides that a Notice of Appeal shall be filed at the Registry of the Tribunal or Court of Appeal within 21 days in respect of a final decision and 14 days in respect of interlocutory rulings. Since this article is in respect of final judgments, the appellants, by virtue of this Practice Direction, have 21 days from the date of judgment to file their Notices of Appeal.

On the flip side, Section 2 of the Supreme Court Election Appeals Practice Directions, 2023 stipulates that an appellant shall file, in the Registry of the Court of Appeal, the notice and grounds of appeal within 14 days from the date of delivery of the judgment appealed against. It is, therefore, safer to file an appeal to the Supreme Court within 14 days from the date of delivery of the judgment appealed against.

Failure to file the Notice of Appeal within the prescribed period from the date of judgment renders the appeal incompetent, and it shall be struck out by the Court. See: ANPP V. GONI (2021) 7 NWLR (PT. 1298) 147 @ 182; SIJUWADE V. OYEMOLE (2010) ALL FWLR (PT. 513) 1407; OKOREAFFIA v. AGWU (2010) LPELR-8654(CA). Similarly, a judgment delivered outside the 60 days stipulated by the Constitution shall be a nullity.

COMPILATION, TRANSMISSION AND SERVICE OF RECORDS

Upon filing the Notice of Appeal, the Appellant shall pay the prescribed fee for the compilation of the Record of Appeal and furnish as many copies as there are Respondents, plus 10 extra copies for the Secretary of the Tribunal. Further, the Appellant shall pay a prescribed fee for the compilation of the Records of Appeal and the service of same on all the Respondents.

Within 10 days of the filing of the Notice of Appeal, the Secretary of the Tribunal shall compile the records of appeal and serve same on all the parties. It is important to add that, by Section 18 of the Practice Directions, the compilation, transmission, filing and service of all processes in respect of an appeal shall be done electronically, as stipulated under the Court of Appeal Rules, 2021. Failure to compile and transmit the records within the stipulated days renders the appeal abandoned, which may expose it to the inescapable fate of outright dismissal.

BRIEF OF ARGUMENT

The provision of Sections 10-15 of the Practice Directions is that, within 7 days after the service of the Records of Appeal, the Appellant shall file his Brief of Argument at the Court of Appeal. The same rule applies to the Supreme Court. The Respondent shall, within 5 days of service of the Appellant's Brief of Argument, file his Brief of Argument. If and where necessary, the Appellant may file a Reply Brief within 2 days of the service of the Respondent's Brief of Argument. However, under the Supreme Court Election Appeals Practice Directions, 2023, an Appellant may file a reply brief within 3 days of the service of the Respondent's brief.

Every Brief of Argument, either of the Appellant or Respondent, shall not exceed 40 pages. Additionally, the Reply Brief shall not exceed 15 pages; under the Supreme Court Election Appeals Practice Direction, 2023, a Reply brief shall not exceed 10 pages. The paper upon which the processes are printed shall be in 210mm by 297mm paper size (A4) and shall be typed in either Arial, Times New Roman or Tahoma with font size 14 and 1.5 line spacing. The Supreme Court Election Appeals Practice Direction, 2023 recognizes Verdana font and excludes Tahoma.
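
For quick reference, the sketch below restates those formatting limits as data and checks a brief against them. The dictionary layout and the check() helper are hypothetical illustrations of the rules as summarised in this article (font size, paper size and line spacing are omitted for brevity), not an official schema.

```python
# Page and font limits as summarised above; illustrative only. The 40-page
# cap is applied to both courts, as the article states it generally.
LIMITS = {
    "Court of Appeal": {"brief_pages": 40, "reply_pages": 15,
                        "fonts": {"Arial", "Times New Roman", "Tahoma"}},
    "Supreme Court":   {"brief_pages": 40, "reply_pages": 10,
                        "fonts": {"Arial", "Times New Roman", "Verdana"}},
}

def check(court, pages, font, reply=False):
    """Return True if a brief meets the page and font limits for the given court."""
    rules = LIMITS[court]
    cap = rules["reply_pages"] if reply else rules["brief_pages"]
    return pages <= cap and font in rules["fonts"]

print(check("Court of Appeal", 38, "Tahoma"))  # True
print(check("Supreme Court", 38, "Tahoma"))    # False: Tahoma is not recognised there
```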

Instructively, Section 14(c) of the Election Judicial Proceedings Practice Directions provides that failure to comply with the provisions above renders the violating brief invalid.

At the earliest date before the date set down for the hearing of the Appeal, the party who has filed a Brief of Argument or his counsel shall forward to the Registrar of the Court of Appeal a list of the law reports, textbooks and other authorities which Counsel intend to rely on at the hearing.

HEARING OF APPEAL

By Section 16 of the Practice Directions, oral adumbration will be allowed at the hearing of the appeal, and 15 minutes shall be allowed for the arguments of each party, unless otherwise directed. At the Supreme Court, except directed otherwise, only 10 minutes are slated for adumbration.

IMPORTANCE OF ADHERENCE TO PRACTICE DIRECTION ESPECIALLY IN RESPECT OF APPEALS

Exercising the powers conferred upon them by the Constitution, the Court of Appeal Act, the Supreme Court Act and the Electoral Act 2022, the President of the Court of Appeal issued the Election Judicial Proceedings Practice Directions, 2022, and the Chief Justice of Nigeria issued the Supreme Court Election Appeals Practice Directions, 2023 – both in respect of election-related matters, including appeals.

In OWURU v. AWUSE & ORS (2004) LPELR-7339(CA), the Court of Appeal emphasized that the Practice Directions have constitutional backing, must be strictly obeyed and cannot be circumvented. Our courts have consistently shown no favour to any party that fails to obey the provisions of the Practice Directions, and this is usually because of the sui generis nature of election petitions. Put simply, failure to comply strictly with the provisions of the Practice Directions could be very fatal and may vitiate the entire case of the defaulting party. See: OJUGBELE Vs LAMIDI (1999) 10 NWLR (Pt 621) 167.

It is not acceptable to resort to blackmailing or attacking the hallowed judiciary when judgments are not delivered in favour of a party. What is expected is for the parties to carefully study the judgment and take a position on whether or not to proceed on appeal. Importantly, so as to avoid being knocked out on the basis of barren technicalities, it is crucial for lawyers and litigants alike to play strictly by the rules of law, inclusive of the provisions of the Practice Directions, in respect of election petition appeals. Needless to say, election petitions are sui generis and procedural errors can be very costly. This author has seen many cases where decisions of election tribunals are upturned on appeal even in somewhat unlikely situations. However, this happens where, beyond the merit of the appeal, the appellants diligently abide by the established rules of election judicial proceedings.

Festus Ogun is a constitutional lawyer and Managing Partner at FOLEGAL, Lagos. He can be reached via  [email protected]   09066324982.

1. Osun state

As reported by PM News, the appeal court sitting in Ibadan, Oyo state, in November 2010, sacked Olagunsoye Oyinlola as Osun state governor and declared Rauf Aregbesola of the Action Congress of Nigeria (ACN) winner of the 2007 governorship election in the state.


In a unanimous judgement, the five-member appeal panel led by Justice Clara Ogunbiyi declared Aregbesola the winner and ordered that he should be sworn in immediately.

2. Ondo state

According to SaharaReporters, in August 2008, the Ondo state election petitions tribunal ordered that the Labour Party governorship candidate, Olusegun Mimiko, be sworn in immediately as the state governor because he won the valid votes in 12 of the 18 local government areas.

The five-man tribunal, led by Garba Nabaruma, gave the judgement after it nullified the election that produced Governor Olusegun Agagu.

Agagu appealed the verdict, and in February 2009, the appeal court in Benin City, Edo state, upheld the ruling of the electoral tribunal, stating that Agagu did not win the April 2007 governorship election.

3. Edo state

The chairman of the Edo state governorship petition tribunal, Justice Peter Umeadi, in Benin City, in March 2008, ruled that Adams Oshiomhole of the Action Congress (AC) had proved his allegations of fraud , voter intimidation, multiple voter registration, over-voting and election violence.

The tribunal, therefore, held that the results were adversely affected and declared Oshiomhole governor of Edo State.

The appeal court in November 2008 upheld the judgment of the state elections tribunal that annulled the victory of Oserheimen Osunbor of the Peoples Democratic Party (PDP).

The Court ruled that Osunbor did not get the most votes in the April 2007 vote and was unlawfully elected in a flawed and chaotic election.

Justice Umaru Abdullahi, president of the court said:

“It is clear that the appeal lacks merit and is hereby dismissed ... Oshiomhole is hereby declared the lawfully elected governor.”

4. Imo state

Residents of Imo state got a happy new year judgement from the Supreme Court in January 2020, when a seven-man panel led by Justice Kudirat Kekere-Ekun declared Hope Uzodimma of the All Progressives Congress (APC) the winner of the gubernatorial election.

As reported by TheCable, the apex court sacked Emeka Ihedioha as governor of Imo state, stating that the PDP candidate did not win the majority of lawful votes cast in the 2019 election.

“The votes due to the appellant, Hope Uzodinma and the All Progressives Congress (APC) from 388 polling units were wrongly excluded from scores ascribed to them,” she held.
“It is hereby ordered that Emeka Ihedioha, was not duly elected by majority of lawful votes cast at the said election. His return as the elected governor of Imo state is hereby declared null and void and accordingly set aside."

5. Ekiti state

The appeal court sitting in Ilorin, Kwara state, in October 2010 sacked Segun Oni and ordered that the Action Congress candidate, Kayode Fayemi, be sworn in in his place.

President of the court, Justice Ayo Salami, leading four other judges, said Fayemi defeated Oni by a majority of 10,965 votes, contrary to the earlier verdicts of INEC and the lower court, PM News reports.

The judge added that the court voided the votes recorded for Oni in the Ijero and Ido-Osi local council areas for non-compliance with the Electoral Act.

The court ordered INEC to withdraw the certificate of return earlier given to Oni and issue a new one proclaiming Fayemi the lawfully elected governor.

6. Bayelsa state

The appeal court sitting in Port Harcourt, Rivers state, in April 2008, ruled that there was no election in Bayelsa state and ordered a fresh election.

The court made the pronouncement after Ebitimi Amgbare of the Action Congress of Nigeria (ACN) approached the court to challenge the victory of Timipre Sylva of the Peoples Democratic Party (PDP).

Sylva, however, won the fresh election and was returned as governor of Bayelsa state.

7. Anambra state

Peter Obi of the All Progressives Grand Alliance (APGA) was the first to get a positive verdict from the tribunal when the Court of Appeal sitting in Enugu, in March 2006, removed Chris Ngige as governor of Anambra state.

The court declared Peter Obi the winner, upholding the ruling of the Anambra State Election Petition Tribunal.

8. Kogi state

The Kogi State Election Petitions Tribunal nullified the April 2007 gubernatorial election after the All Nigeria Peoples Party (ANPP) candidate, Prince Abubakar Audu, accused INEC of wrongly excluding him from taking part in the election.

As reported by SaharaReporters , Ibrahim Idris of the PDP appealed the tribunal judgement, but the Court of Appeal reaffirmed the election petition tribunal's verdict.

On February 6, 2008, a court of appeal nullified the April 14, 2007 election and ordered that a fresh election be conducted.

Idris won the fresh election on March 29, 2008, and was returned as the governor of Kogi state.

It will continue until the constitution is amended

An Abuja-based lawyer, Chinedu Onuoha, said the pattern would continue until the constitution is amended.

Onuoha argued that the constitution already envisages these situations when it stipulates in section 285(6) that the tribunal is bound to deliver its judgement in writing within 180 days from the date of the filing of the petition, which means that the declared winner would have been sworn in before the expiration of the 180 days.

"If we don't alter the constitution, there's nothing we can do about it. It will continue like this because of the 180 days stipulated by the constitution.
"It means the constitution already envisage a situation when someone will be sworn in when the matter is still in court.
"The process of even changing the constitution is cumbersome. It's a complex produce to change the constitution."

INEC releases final list of governorship candidates for Kogi, Bayelsa, Imo elections

The Independent National Electoral Commission (INEC) on Tuesday, June 6, released the final list of candidates for the November 11 governorship elections in Kogi, Bayelsa, and Imo states.

INEC national commissioner and chairman of the Information and Voter Education Committee, Festus Okoye, made this known in a statement released on Tuesday.

Source: Legit.ng



Open access | Published: 03 June 2024

Applying large language models for automated essay scoring for non-native Japanese

  • Wenchao Li 1 &
  • Haitao Liu 2  

Humanities and Social Sciences Communications, volume 11, Article number: 723 (2024)


Recent advancements in artificial intelligence (AI) have led to an increased use of large language models (LLMs) for language assessment tasks such as automated essay scoring (AES), automated listening tests, and automated oral proficiency assessments. The application of LLMs for AES in the context of non-native Japanese, however, remains limited. This study explores the potential of LLM-based AES by comparing the efficiency of different models, i.e. two conventional machine learning-based methods (Jess and JWriter), two LLMs (GPT and BERT), and one Japanese local LLM (Open-Calm large model). To conduct the evaluation, a dataset consisting of 1400 story-writing scripts authored by learners with 12 different first languages was used. Statistical analysis revealed that GPT-4 outperforms Jess and JWriter, BERT, and the Japanese language-specific trained Open-Calm large model in terms of annotation accuracy and predicting learning levels. Furthermore, by comparing 18 different models that utilize various prompts, the study emphasized the significance of prompts in achieving accurate and reliable evaluations using LLMs.


Conventional machine learning technology in AES

AES has experienced significant growth with the advancement of machine learning technologies in recent decades. In the earlier stages of AES development, conventional machine learning-based approaches were commonly used. These approaches involved the following procedures: a) feeding the machine with a dataset. In this step, a dataset of essays is provided to the machine learning system. The dataset serves as the basis for training the model and establishing patterns and correlations between linguistic features and human ratings. b) the machine learning model is trained using linguistic features that best represent human ratings and can effectively discriminate learners’ writing proficiency. These features include lexical richness (Lu, 2012 ; Kyle and Crossley, 2015 ; Kyle et al. 2021 ), syntactic complexity (Lu, 2010 ; Liu, 2008 ), text cohesion (Crossley and McNamara, 2016 ), and among others. Conventional machine learning approaches in AES require human intervention, such as manual correction and annotation of essays. This human involvement was necessary to create a labeled dataset for training the model. Several AES systems have been developed using conventional machine learning technologies. These include the Intelligent Essay Assessor (Landauer et al. 2003 ), the e-rater engine by Educational Testing Service (Attali and Burstein, 2006 ; Burstein, 2003 ), MyAccess with the InterlliMetric scoring engine by Vantage Learning (Elliot, 2003 ), and the Bayesian Essay Test Scoring system (Rudner and Liang, 2002 ). These systems have played a significant role in automating the essay scoring process and providing quick and consistent feedback to learners. However, as touched upon earlier, conventional machine learning approaches rely on predetermined linguistic features and often require manual intervention, making them less flexible and potentially limiting their generalizability to different contexts.

In the context of the Japanese language, conventional machine learning-incorporated AES tools include Jess (Ishioka and Kameda, 2006 ) and JWriter (Lee and Hasebe, 2017 ). Jess assesses essays by deducting points from the perfect score, utilizing the Mainichi Daily News newspaper as a database. The evaluation criteria employed by Jess encompass various aspects, such as rhetorical elements (e.g., reading comprehension, vocabulary diversity, percentage of complex words, and percentage of passive sentences), organizational structures (e.g., forward and reverse connection structures), and content analysis (e.g., latent semantic indexing). JWriter employs linear regression analysis to assign weights to various measurement indices, such as average sentence length and total number of characters. These weights are then combined to derive the overall score. A pilot study involving the Jess model was conducted on 1320 essays at different proficiency levels, including primary, intermediate, and advanced. However, the results indicated that the Jess model failed to significantly distinguish between these essay levels. Out of the 16 measures used, four measures, namely median sentence length, median clause length, median number of phrases, and maximum number of phrases, did not show statistically significant differences between the levels. Additionally, two measures exhibited between-level differences but lacked linear progression: the number of attributives declined words and the Kanji/kana ratio. On the other hand, the remaining measures, including maximum sentence length, maximum clause length, number of attributive conjugated words, maximum number of consecutive infinitive forms, maximum number of conjunctive-particle clauses, k characteristic value, percentage of big words, and percentage of passive sentences, demonstrated statistically significant between-level differences and displayed linear progression.

Both Jess and JWriter exhibit notable limitations, including the manual selection of feature parameters and weights, which can introduce biases into the scoring process. The reliance on human annotators to label non-native language essays also introduces potential noise and variability in the scoring. Furthermore, an important concern is the possibility of system manipulation and cheating by learners who are aware of the regression equation utilized by the models (Hirao et al. 2020 ). These limitations emphasize the need for further advancements in AES systems to address these challenges.

Deep learning technology in AES

Deep learning has emerged as one of the approaches for improving the accuracy and effectiveness of AES. Deep learning-based AES methods utilize artificial neural networks that mimic the human brain’s functioning through layered algorithms and computational units. Unlike conventional machine learning, deep learning autonomously learns from the environment and past errors without human intervention. This enables deep learning models to establish nonlinear correlations, resulting in higher accuracy. Recent advancements in deep learning have led to the development of transformers, which are particularly effective in learning text representations. Noteworthy examples include bidirectional encoder representations from transformers (BERT) (Devlin et al. 2019 ) and the generative pretrained transformer (GPT) (OpenAI).

BERT is a linguistic representation model that utilizes a transformer architecture and is trained on two tasks: masked linguistic modeling and next-sentence prediction (Hirao et al. 2020 ; Vaswani et al. 2017 ). In the context of AES, BERT follows specific procedures, as illustrated in Fig. 1 : (a) the tokenized prompts and essays are taken as input; (b) special tokens, such as [CLS] and [SEP], are added to mark the beginning and separation of prompts and essays; (c) the transformer encoder processes the prompt and essay sequences, resulting in hidden layer sequences; (d) the hidden layers corresponding to the [CLS] tokens (T[CLS]) represent distributed representations of the prompts and essays; and (e) a multilayer perceptron uses these distributed representations as input to obtain the final score (Hirao et al. 2020 ).

Figure 1. AES system with BERT (Hirao et al. 2020).
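To make the input format in steps (a)–(e) concrete, the following is a minimal sketch (not the authors' implementation) of scoring an essay with a BERT encoder and a single regression head via Hugging Face Transformers; the checkpoint name, example texts, and the assumption that a fine-tuned model is available are all illustrative.

```python
# Minimal sketch: BERT-based essay scoring with a regression head (illustrative only).
# Assumes a Japanese BERT checkpoint and its tokenizer dependencies (e.g. fugashi) are installed.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "cl-tohoku/bert-base-japanese"  # assumed checkpoint; any Japanese BERT would do
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=1)  # 1 output = regression score

prompt = "「ピクニック」の4コマ漫画について物語を書いてください。"   # hypothetical task prompt
essay = "出かける前に二人が地図を見ている間に、犬がバスケットに入ってしまいました。"  # learner essay (excerpt)

# [CLS] prompt [SEP] essay [SEP]; essays longer than 512 tokens are truncated (a known limitation)
inputs = tokenizer(prompt, essay, truncation=True, max_length=512, return_tensors="pt")
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()  # meaningless until the head is fine-tuned on scored essays
print(f"predicted score: {score:.2f}")
```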

The training of BERT using a substantial amount of sentence data through the Masked Language Model (MLM) allows it to capture contextual information within the hidden layers. Consequently, BERT is expected to be capable of identifying artificial essays as invalid and assigning them lower scores (Mizumoto and Eguchi, 2023 ). In the context of AES for nonnative Japanese learners, Hirao et al. ( 2020 ) combined the long short-term memory (LSTM) model proposed by Hochreiter and Schmidhuber ( 1997 ) with BERT to develop a tailored automated Essay Scoring System. The findings of their study revealed that the BERT model outperformed both the conventional machine learning approach utilizing character-type features such as “kanji” and “hiragana”, as well as the standalone LSTM model. Takeuchi et al. ( 2021 ) presented an approach to Japanese AES that eliminates the requirement for pre-scored essays by relying solely on reference texts or a model answer for the essay task. They investigated multiple similarity evaluation methods, including frequency of morphemes, idf values calculated on Wikipedia, LSI, LDA, word-embedding vectors, and document vectors produced by BERT. The experimental findings revealed that the method utilizing the frequency of morphemes with idf values exhibited the strongest correlation with human-annotated scores across different essay tasks. The utilization of BERT in AES encounters several limitations. Firstly, essays often exceed the model’s maximum length limit. Second, only score labels are available for training, which restricts access to additional information.

Mizumoto and Eguchi ( 2023 ) were pioneers in employing the GPT model for AES in non-native English writing. Their study focused on evaluating the accuracy and reliability of AES using the GPT-3 text-davinci-003 model, analyzing a dataset of 12,100 essays from the corpus of nonnative written English (TOEFL11). The findings indicated that AES utilizing the GPT-3 model exhibited a certain degree of accuracy and reliability. They suggest that GPT-3-based AES systems hold the potential to provide support for human ratings. However, applying GPT model to AES presents a unique natural language processing (NLP) task that involves considerations such as nonnative language proficiency, the influence of the learner’s first language on the output in the target language, and identifying linguistic features that best indicate writing quality in a specific language. These linguistic features may differ morphologically or syntactically from those present in the learners’ first language, as observed in (1)–(3).

Isolating

我-送了-他-一本-书

Wǒ-sòngle-tā-yī běn-shū

1sg-give.past-him-one.cl-book

"I gave him a book."

Agglutinative

彼-に-本-を-あげ-まし-た

Kare-ni-hon-o-age-mashi-ta

3sg-dat-book-acc-give.honorification-past

Inflectional

give, give-s, gave, given, giving

Additionally, the morphological agglutination and subject-object-verb (SOV) order in Japanese, along with its idiomatic expressions, pose additional challenges for applying language models in AES tasks (4).

足-が 棒-に なり-ました

Ashi-ga bō-ni nari-mashita

leg-nom stick-dat become-past

“My leg became like a stick (I am extremely tired).”

The example sentence provided demonstrates the morpho-syntactic structure of Japanese and the presence of an idiomatic expression. In this sentence, the verb "なる" (naru), meaning "to become", appears at the end of the sentence. The verb stem "なり" (nari) is attached to morphemes indicating honorification ("ます" - masu) and tense ("た" - ta), showcasing agglutination. While the sentence can be literally translated as "my leg became like a stick", it carries an idiomatic interpretation that implies "I am extremely tired".

To overcome this issue, CyberAgent Inc. ( 2023 ) has developed the Open-Calm series of language models specifically designed for Japanese. Open-Calm consists of pre-trained models available in various sizes, such as Small, Medium, Large, and 7b. Figure 2 depicts the fundamental structure of the Open-Calm model. A key feature of this architecture is the incorporation of the Lora Adapter and GPT-NeoX frameworks, which can enhance its language processing capabilities.

Figure 2. GPT-NeoX Model Architecture (Okgetheng and Takeuchi 2024).

In a recent study conducted by Okgetheng and Takeuchi ( 2024 ), they assessed the efficacy of Open-Calm language models in grading Japanese essays. The research utilized a dataset of approximately 300 essays, which were annotated by native Japanese educators. The findings of the study demonstrate the considerable potential of Open-Calm language models in automated Japanese essay scoring. Specifically, among the Open-Calm family, the Open-Calm Large model (referred to as OCLL) exhibited the highest performance. However, it is important to note that, as of the current date, the Open-Calm Large model does not offer public access to its server. Consequently, users are required to independently deploy and operate the environment for OCLL. In order to utilize OCLL, users must have a PC equipped with an NVIDIA GeForce RTX 3060 (8 or 12 GB VRAM).

In summary, while the potential of LLMs in automated scoring of nonnative Japanese essays has been demonstrated in two studies—BERT-driven AES (Hirao et al. 2020 ) and OCLL-based AES (Okgetheng and Takeuchi, 2024 )—the number of research efforts in this area remains limited.

Another significant challenge in applying LLMs to AES lies in prompt engineering and ensuring its reliability and effectiveness (Brown et al. 2020 ; Rae et al. 2021 ; Zhang et al. 2021 ). Various prompting strategies have been proposed, such as the zero-shot chain of thought (CoT) approach (Kojima et al. 2022 ), which involves manually crafting diverse and effective examples. However, manual efforts can lead to mistakes. To address this, Zhang et al. ( 2021 ) introduced an automatic CoT prompting method called Auto-CoT, which demonstrates matching or superior performance compared to the CoT paradigm. Another prompt framework is trees of thoughts, enabling a model to self-evaluate its progress at intermediate stages of problem-solving through deliberate reasoning (Yao et al. 2023 ).

Beyond linguistic studies, there has been a noticeable increase in the number of foreign workers in Japan and Japanese learners worldwide (Ministry of Health, Labor, and Welfare of Japan, 2022 ; Japan Foundation, 2021 ). However, existing assessment methods, such as the Japanese Language Proficiency Test (JLPT), J-CAT, and TTBJ Footnote 1 , primarily focus on reading, listening, vocabulary, and grammar skills, neglecting the evaluation of writing proficiency. As the number of workers and language learners continues to grow, there is a rising demand for an efficient AES system that can reduce costs and time for raters and be utilized for employment, examinations, and self-study purposes.

This study aims to explore the potential of LLM-based AES by comparing the effectiveness of five models: two LLMs (GPT Footnote 2 and BERT), one Japanese local LLM (OCLL), and two conventional machine learning-based methods (linguistic feature-based scoring tools - Jess and JWriter).

The research questions addressed in this study are as follows:

To what extent do the LLM-driven AES and linguistic feature-based AES, when used as automated tools to support human rating, accurately reflect test takers’ actual performance?

What influence does the prompt have on the accuracy and performance of LLM-based AES methods?

The subsequent sections of the manuscript cover the methodology, including the assessment measures for nonnative Japanese writing proficiency, criteria for prompts, and the dataset. The evaluation section focuses on the analysis of annotations and rating scores generated by LLM-driven and linguistic feature-based AES methods.

Methodology

The dataset utilized in this study was obtained from the International Corpus of Japanese as a Second Language (I-JAS) Footnote 3 . This corpus consisted of 1000 participants who represented 12 different first languages. For the study, the participants were given a story-writing task on a personal computer. They were required to write two stories based on the 4-panel illustrations titled “Picnic” and “The key” (see Appendix A). Background information for the participants was provided by the corpus, including their Japanese language proficiency levels assessed through two online tests: J-CAT and SPOT. These tests evaluated their reading, listening, vocabulary, and grammar abilities. The learners’ proficiency levels were categorized into six levels aligned with the Common European Framework of Reference for Languages (CEFR) and the Reference Framework for Japanese Language Education (RFJLE): A1, A2, B1, B2, C1, and C2. According to Lee et al. ( 2015 ), there is a high level of agreement (r = 0.86) between the J-CAT and SPOT assessments, indicating that the proficiency certifications provided by J-CAT are consistent with those of SPOT. However, it is important to note that the scores of J-CAT and SPOT do not have a one-to-one correspondence. In this study, the J-CAT scores were used as a benchmark to differentiate learners of different proficiency levels. A total of 1400 essays were utilized, representing the beginner (aligned with A1), A2, B1, B2, C1, and C2 levels based on the J-CAT scores. Table 1 provides information about the learners’ proficiency levels and their corresponding J-CAT and SPOT scores.

A dataset comprising a total of 1400 essays from the story writing tasks was collected. Among these, 714 essays were utilized to evaluate the reliability of the LLM-based AES method, while the remaining 686 essays were designated as development data to assess the LLM-based AES’s capability to distinguish participants with varying proficiency levels. The GPT 4 API was used in this study. A detailed explanation of the prompt-assessment criteria is provided in Section Prompt . All essays were sent to the model for measurement and scoring.

Measures of writing proficiency for nonnative Japanese

Japanese exhibits a morphologically agglutinative structure where morphemes are attached to the word stem to convey grammatical functions such as tense, aspect, voice, and honorifics, e.g. (5).

食べ-させ-られ-まし-た-か

tabe-sase-rare-mashi-ta-ka

[eat (stem)-causative-passive voice-honorification-tense. past-question marker]

Japanese employs nine case particles to indicate grammatical functions: the nominative case particle が (ga), the accusative case particle を (o), the genitive case particle の (no), the dative case particle に (ni), the locative/instrumental case particle で (de), the ablative case particle から (kara), the directional case particle へ (e), and the comitative case particle と (to). The agglutinative nature of the language, combined with the case particle system, provides an efficient means of distinguishing between active and passive voice, either through morphemes or case particles, e.g. 食べる taberu "eat (conclusive form)" (active voice); 食べられる taberareru "be eaten (conclusive form)" (passive voice). In the active voice, "パンを食べる" (pan o taberu) translates to "to eat bread". In the passive voice, it becomes "パンが食べられた" (pan ga taberareta), which means "(the) bread was eaten". Additionally, it is important to note that different conjugations of the same lemma are counted as one type in order to ensure a comprehensive assessment of the language features, e.g., 食べる taberu "eat (conclusive)", 食べている tabeteiru "eat (progressive)", and 食べた tabeta "eat (past)" are treated as one type.

To incorporate these features, previous research (Suzuki, 1999 ; Watanabe et al. 1988 ; Ishioka, 2001 ; Ishioka and Kameda, 2006 ; Hirao et al. 2020 ) has identified complexity, fluency, and accuracy as crucial factors for evaluating writing quality. These criteria are assessed through various aspects, including lexical richness (lexical density, diversity, and sophistication), syntactic complexity, and cohesion (Kyle et al. 2021 ; Mizumoto and Eguchi, 2023 ; Ure, 1971 ; Halliday, 1985 ; Barkaoui and Hadidi, 2020 ; Zenker and Kyle, 2021 ; Kim et al. 2018 ; Lu, 2017 ; Ortega, 2015 ). Therefore, this study proposes five scoring categories: lexical richness, syntactic complexity, cohesion, content elaboration, and grammatical accuracy. A total of 16 measures were employed to capture these categories. The calculation process and specific details of these measures can be found in Table 2 .

T-unit, first introduced by Hunt ( 1966 ), is a measure used for evaluating speech and composition. It serves as an indicator of syntactic development and represents the shortest units into which a piece of discourse can be divided without leaving any sentence fragments. In the context of Japanese language assessment, Sakoda and Hosoi ( 2020 ) utilized T-unit as the basic unit to assess the accuracy and complexity of Japanese learners’ speaking and storytelling. The calculation of T-units in Japanese follows the following principles:

A single main clause constitutes 1 T-unit, regardless of the presence or absence of dependent clauses, e.g. (6).

ケンとマリはピクニックに行きました (main clause): 1 T-unit.

If a sentence contains a main clause along with subclauses, each subclause is considered part of the same T-unit, e.g. (7).

天気が良かった の で (subclause)、ケンとマリはピクニックに行きました (main clause): 1 T-unit.

In the case of coordinate clauses, where multiple clauses are connected, each coordinated clause is counted separately. Thus, a sentence with coordinate clauses may have 2 T-units or more, e.g. (8).

ケンは地図で場所を探して (coordinate clause)、マリはサンドイッチを作りました (coordinate clause): 2 T-units.

Lexical diversity refers to the range of words used within a text (Engber, 1995 ; Kyle et al. 2021 ) and is considered a useful measure of the breadth of vocabulary in L n production (Jarvis, 2013a , 2013b ).

The type/token ratio (TTR) is widely recognized as a straightforward measure for calculating lexical diversity and has been employed in numerous studies. These studies have demonstrated a strong correlation between TTR and other methods of measuring lexical diversity (e.g., Bentz et al. 2016 ; Čech and Miroslav, 2018 ; Çöltekin and Taraka, 2018 ). TTR is computed by considering both the number of unique words (types) and the total number of words (tokens) in a given text. Given that the length of learners’ writing texts can vary, this study employs the moving average type-token ratio (MATTR) to mitigate the influence of text length. MATTR is calculated using a 50-word moving window. Initially, a TTR is determined for words 1–50 in an essay, followed by words 2–51, 3–52, and so on until the end of the essay is reached (Díez-Ortega and Kyle, 2023 ). The final MATTR scores were obtained by averaging the TTR scores for all 50-word windows. The following formula was employed to derive MATTR:

\(\mathrm{MATTR}(W)=\frac{\sum_{i=1}^{N-W+1}F_{i}}{W(N-W+1)}\)

Here, N refers to the number of tokens in the corpus. W is the randomly selected token size (W < N). \({F}_{i}\) is the number of types in each window. The \({\rm{MATTR}}({\rm{W}})\) is the mean of a series of type-token ratios (TTRs) based on the word form for all windows. It is expected that individuals with higher language proficiency will produce texts with greater lexical diversity, as indicated by higher MATTR scores.
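As an illustration only, a minimal sketch of the 50-word moving-window MATTR described above might look as follows; it assumes the essay has already been tokenised (for Japanese, typically with a morphological analyser).

```python
# Minimal sketch of MATTR with a moving window (default 50 tokens, as in the study).
def mattr(tokens: list[str], window: int = 50) -> float:
    if len(tokens) <= window:                      # fall back to plain TTR for short texts
        return len(set(tokens)) / len(tokens)
    ttrs = [len(set(tokens[i:i + window])) / window
            for i in range(len(tokens) - window + 1)]
    return sum(ttrs) / len(ttrs)

# Toy example with pre-segmented Japanese tokens and a small window for demonstration.
print(mattr("犬 が バスケット に 入っ て しまい まし た".split(), window=4))
```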

Lexical density was captured by the ratio of the number of lexical words to the total number of words (Lu, 2012 ). Lexical sophistication refers to the utilization of advanced vocabulary, often evaluated through word frequency indices (Crossley et al. 2013 ; Haberman, 2008 ; Kyle and Crossley, 2015 ; Laufer and Nation, 1995 ; Lu, 2012 ; Read, 2000 ). In line of writing, lexical sophistication can be interpreted as vocabulary breadth, which entails the appropriate usage of vocabulary items across various lexicon-grammatical contexts and registers (Garner et al. 2019 ; Kim et al. 2018 ; Kyle et al. 2018 ). In Japanese specifically, words are considered lexically sophisticated if they are not included in the “Japanese Education Vocabulary List Ver 1.0”. Footnote 4 Consequently, lexical sophistication was calculated by determining the number of sophisticated word types relative to the total number of words per essay. Furthermore, it has been suggested that, in Japanese writing, sentences should ideally have a length of no more than 40 to 50 characters, as this promotes readability. Therefore, the median and maximum sentence length can be considered as useful indices for assessment (Ishioka and Kameda, 2006 ).
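The following sketch illustrates, under stated assumptions, how lexical density and lexical sophistication could be computed from a POS-tagged token list; the tag set and the placeholder `jev_list` (standing in for the Japanese Education Vocabulary List Ver 1.0) are hypothetical.

```python
# Minimal sketch: lexical density = lexical words / all words;
# lexical sophistication = word types absent from the reference vocabulary list / all words.
LEXICAL_POS = {"noun", "verb", "adjective", "adverb"}  # assumed content-word tags

def lexical_density(tagged: list[tuple[str, str]]) -> float:
    lexical = [w for w, pos in tagged if pos in LEXICAL_POS]
    return len(lexical) / len(tagged)

def lexical_sophistication(tagged: list[tuple[str, str]], jev_list: set[str]) -> float:
    sophisticated = {w for w, _ in tagged if w not in jev_list}
    return len(sophisticated) / len(tagged)

tokens = [("犬", "noun"), ("が", "particle"), ("走る", "verb")]
print(lexical_density(tokens), lexical_sophistication(tokens, jev_list={"犬", "走る"}))
```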

Syntactic complexity was assessed based on several measures, including the mean length of clauses, verb phrases per T-unit, clauses per T-unit, dependent clauses per T-unit, complex nominals per clause, adverbial clauses per clause, coordinate phrases per clause, and mean dependency distance (MDD). The MDD reflects the distance between the governor and dependent positions in a sentence. A larger dependency distance indicates a higher cognitive load and greater complexity in syntactic processing (Liu, 2008 ; Liu et al. 2017 ). The MDD has been established as an efficient metric for measuring syntactic complexity (Jiang, Quyang, and Liu, 2019 ; Li and Yan, 2021 ). To calculate the MDD, the position numbers of the governor and dependent are subtracted, assuming that words in a sentence are assigned in a linear order, such as W1 … Wi … Wn. In any dependency relationship between words Wa and Wb, Wa is the governor and Wb is the dependent. The MDD of the entire sentence was obtained by taking the absolute value of governor – dependent:

\(\mathrm{MDD}=\frac{1}{n}\sum_{i=1}^{n}|DD_{i}|\)

In this formula, \(n\) represents the number of words in the sentence, and \(DD_{i}\) is the dependency distance of the \(i^{th}\) dependency relationship of the sentence. Building on this, consider the sentence "Mary-ga John-ni keshigomu-o watashita" [Mary-top John-dat eraser-acc give-past], "Mary gave John an eraser": its MDD is 2. Table 3 provides the CSV file as a prompt for GPT-4.
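A minimal sketch of the MDD computation, assuming dependency relations have already been obtained from a parser (which is not part of the measure's definition itself), is given below; the example reproduces the value of 2 mentioned above.

```python
# Minimal sketch: mean dependency distance from (governor_position, dependent_position) pairs.
def mdd(dependencies: list[tuple[int, int]]) -> float:
    return sum(abs(g - d) for g, d in dependencies) / len(dependencies)

# Words in linear order: Mary-ga (1), John-ni (2), keshigomu-o (3), watashita (4);
# each argument depends on the verb in position 4.
print(mdd([(4, 1), (4, 2), (4, 3)]))  # -> 2.0
```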

Cohesion (semantic similarity) and content elaboration aim to capture the ideas presented in test takers' essays. Cohesion was assessed using three measures: synonym overlap/paragraph (topic), synonym overlap/paragraph (keywords), and word2vec cosine similarity. Content elaboration and development were measured as the number of metadiscourse markers (types)/number of words. To capture content closely, this study proposes a novel distance-based representation, encoding the cosine distance between the i-vectors of the learner's essay and of the essay task (topic and keywords). The learner's essay is decoded into a word sequence and aligned to the essay task's topic and keywords for log-likelihood measurement. The cosine distance reveals the content elaboration score of the learner's essay. The mathematical equation of cosine similarity between target and reference vectors is shown in (11), assuming there are i essays and ( L i , …, L n ) and ( N i , …, N n ) are the vectors representing the learner's essay and the task's topic and keywords, respectively. The content elaboration distance between L i and N i was calculated as follows:

\(\cos(\theta)=\frac{L\cdot N}{|L||N|}=\frac{\sum_{i=1}^{n}L_{i}N_{i}}{\sqrt{\sum_{i=1}^{n}L_{i}^{2}}\sqrt{\sum_{i=1}^{n}N_{i}^{2}}}\)

A high similarity value indicates a low difference between the two recognition outcomes, which in turn suggests a high level of proficiency in content elaboration.
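A minimal sketch of equation (11) is shown below; the toy vectors are placeholders for the word2vec/i-vector representations of the learner's essay and the task's topic and keywords.

```python
# Minimal sketch of cosine similarity between a learner-essay vector and a task vector.
import numpy as np

def cosine_similarity(l: np.ndarray, n: np.ndarray) -> float:
    return float(np.dot(l, n) / (np.linalg.norm(l) * np.linalg.norm(n)))

learner_vec = np.array([0.2, 0.7, 0.1])   # assumed embedding of the learner's essay
task_vec = np.array([0.3, 0.6, 0.2])      # assumed embedding of the task topic/keywords
print(cosine_similarity(learner_vec, task_vec))
```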

To evaluate the effectiveness of the proposed measures in distinguishing different proficiency levels among nonnative Japanese speakers’ writing, we conducted a multi-faceted Rasch measurement analysis (Linacre, 1994 ). This approach applies measurement models to thoroughly analyze various factors that can influence test outcomes, including test takers’ proficiency, item difficulty, and rater severity, among others. The underlying principles and functionality of multi-faceted Rasch measurement are illustrated in (12).

\(\log\left(\frac{P_{nijk}}{P_{nij(k-1)}}\right)=B_{n}-D_{i}-C_{j}-F_{k}\)

(12) defines the logarithmic transformation of the probability ratio ( P nijk /P nij(k-1) )) as a function of multiple parameters. Here, n represents the test taker, i denotes a writing proficiency measure, j corresponds to the human rater, and k represents the proficiency score. The parameter B n signifies the proficiency level of test taker n (where n ranges from 1 to N). D j represents the difficulty parameter of test item i (where i ranges from 1 to L), while C j represents the severity of rater j (where j ranges from 1 to J). Additionally, F k represents the step difficulty for a test taker to move from score ‘k-1’ to k . P nijk refers to the probability of rater j assigning score k to test taker n for test item i . P nij(k-1) represents the likelihood of test taker n being assigned score ‘k-1’ by rater j for test item i . Each facet within the test is treated as an independent parameter and estimated within the same reference framework. To evaluate the consistency of scores obtained through both human and computer analysis, we utilized the Infit mean-square statistic. This statistic is a chi-square measure divided by the degrees of freedom and is weighted with information. It demonstrates higher sensitivity to unexpected patterns in responses to items near a person’s proficiency level (Linacre, 2002 ). Fit statistics are assessed based on predefined thresholds for acceptable fit. For the Infit MNSQ, which has a mean of 1.00, different thresholds have been suggested. Some propose stricter thresholds ranging from 0.7 to 1.3 (Bond et al. 2021 ), while others suggest more lenient thresholds ranging from 0.5 to 1.5 (Eckes, 2009 ). In this study, we adopted the criterion of 0.70–1.30 for the Infit MNSQ.

Moving forward, we can now proceed to assess the effectiveness of the 16 proposed measures based on five criteria for accurately distinguishing various levels of writing proficiency among non-native Japanese speakers. To conduct this evaluation, we utilized the development dataset from the I-JAS corpus, as described in Section Dataset . Table 4 provides a measurement report that presents the performance details of the 16 measures under consideration. The measure separation was found to be 4.02, indicating a clear differentiation among the measures. The reliability index for the measure separation was 0.891, suggesting consistency in the measurement. Similarly, the person separation reliability index was 0.802, indicating the accuracy of the assessment in distinguishing between individuals. All 16 measures demonstrated Infit mean squares within a reasonable range, ranging from 0.76 to 1.28. The Synonym overlap/paragraph (topic) measure exhibited a relatively high outfit mean square of 1.46, although the Infit mean square falls within an acceptable range. The standard error for the measures ranged from 0.13 to 0.28, indicating the precision of the estimates.

Table 5 further illustrated the weights assigned to different linguistic measures for score prediction, with higher weights indicating stronger correlations between those measures and higher scores. Specifically, the following measures exhibited higher weights compared to others: moving average type token ratio per essay has a weight of 0.0391. Mean dependency distance had a weight of 0.0388. Mean length of clause, calculated by dividing the number of words by the number of clauses, had a weight of 0.0374. Complex nominals per T-unit, calculated by dividing the number of complex nominals by the number of T-units, had a weight of 0.0379. Coordinate phrases rate, calculated by dividing the number of coordinate phrases by the number of clauses, had a weight of 0.0325. Grammatical error rate, representing the number of errors per essay, had a weight of 0.0322.

Criteria (output indicator)

The criteria used to evaluate the writing ability in this study were based on CEFR, which follows a six-point scale ranging from A1 to C2. To assess the quality of Japanese writing, the scoring criteria from Table 6 were utilized. These criteria were derived from the IELTS writing standards and served as assessment guidelines and prompts for the written output.

A prompt is a question or detailed instruction that is provided to the model to obtain a proper response. After several pilot experiments, we decided to provide the measures (Section Measures of writing proficiency for nonnative Japanese ) as the input prompt and use the criteria (Section Criteria (output indicator) ) as the output indicator. Regarding the prompt language, considering that the LLM was tasked with rating Japanese essays, would a prompt in Japanese work better Footnote 5 ? We conducted experiments comparing the performance of GPT-4 using both English and Japanese prompts. Additionally, we utilized the Japanese local model OCLL with Japanese prompts. Multiple trials were conducted using the same sample. Regardless of the prompt language used, we consistently obtained the same grading results with GPT-4, which assigned a grade of B1 to the writing sample. This suggested that GPT-4 is reliable and capable of producing consistent ratings regardless of the prompt language. On the other hand, when we used Japanese prompts with the Japanese local model OCLL, we encountered inconsistent grading results. Out of 10 attempts with OCLL, only 6 yielded consistent grading results (B1), while the remaining 4 showed different outcomes, including A1 and B2 grades. These findings indicated that the language of the prompt was not the determining factor for reliable AES. Instead, the size of the training data and the model parameters played crucial roles in achieving consistent and reliable AES results for the language model.

The following is the utilized prompt, which details all measures and requires the LLM to score the essays using holistic and trait scores.

Please evaluate Japanese essays written by Japanese learners and assign a score to each essay on a six-point scale, ranging from A1, A2, B1, B2, C1 to C2. Additionally, please provide trait scores and display the calculation process for each trait score. The scoring should be based on the following criteria:

Moving average type-token ratio.

Number of lexical words (token) divided by the total number of words per essay.

Number of sophisticated word types divided by the total number of words per essay.

Mean length of clause.

Verb phrases per T-unit.

Clauses per T-unit.

Dependent clauses per T-unit.

Complex nominals per clause.

Adverbial clauses per clause.

Coordinate phrases per clause.

Mean dependency distance.

Synonym overlap paragraph (topic and keywords).

Word2vec cosine similarity.

Connectives per essay.

Conjunctions per essay.

Number of metadiscourse markers (types) divided by the total number of words.

Number of errors per essay.

Japanese essay text

出かける前に二人が地図を見ている間に、サンドイッチを入れたバスケットに犬が入ってしまいました。それに気づかずに二人は楽しそうに出かけて行きました。やがて突然犬がバスケットから飛び出し、二人は驚きました。バスケット の 中を見ると、食べ物はすべて犬に食べられていて、二人は困ってしまいました。(ID_JJJ01_SW1)

The score of the example above was B1. Figure 3 provides an example of holistic and trait scores provided by GPT-4 (with a prompt indicating all measures) via Bing Footnote 6 .

Figure 3. Example of GPT-4 AES and feedback (with a prompt indicating all measures).
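For illustration, a minimal sketch of how such a prompt and essay could be sent to GPT-4 through the OpenAI Python SDK is given below; the abbreviated rubric text, model name, and parameters are assumptions rather than the authors' exact pipeline.

```python
# Minimal sketch: scoring one essay with GPT-4 via the OpenAI Python SDK (illustrative only).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

rubric = ("Please evaluate Japanese essays written by Japanese learners and assign a score "
          "on a six-point scale (A1-C2). Also provide trait scores for the 16 listed measures.")
essay = "出かける前に二人が地図を見ている間に、サンドイッチを入れたバスケットに犬が入ってしまいました。"

response = client.chat.completions.create(
    model="gpt-4",
    temperature=0,  # deterministic output helps scoring consistency
    messages=[
        {"role": "system", "content": rubric},
        {"role": "user", "content": essay},
    ],
)
print(response.choices[0].message.content)  # holistic grade plus trait scores
```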

Statistical analysis

The aim of this study is to investigate the potential use of LLM for nonnative Japanese AES. It seeks to compare the scoring outcomes obtained from feature-based AES tools, which rely on conventional machine learning technology (i.e. Jess, JWriter), with those generated by AI-driven AES tools utilizing deep learning technology (BERT, GPT, OCLL). To assess the reliability of a computer-assisted annotation tool, the study initially established human-human agreement as the benchmark measure. Subsequently, the performance of the LLM-based method was evaluated by comparing it to human-human agreement.

To assess annotation agreement, the study employed standard measures such as precision, recall, and F-score (Brants 2000 ; Lu 2010 ), along with the quadratically weighted kappa (QWK) to evaluate the consistency and agreement in the annotation process. Assume A and B represent human annotators. When comparing the annotations of the two annotators, the following results are obtained. The evaluation of precision, recall, and F-score metrics was illustrated in equations (13) to (15).

\({\rm{Recall}}(A,B)=\frac{{\rm{Number}}\,{\rm{of}}\,{\rm{identical}}\,{\rm{nodes}}\,{\rm{in}}\,A\,{\rm{and}}\,B}{{\rm{Number}}\,{\rm{of}}\,{\rm{nodes}}\,{\rm{in}}\,A}\)

\({\rm{Precision}}(A,\,B)=\frac{{\rm{Number}}\,{\rm{of}}\,{\rm{identical}}\,{\rm{nodes}}\,{\rm{in}}\,A\,{\rm{and}}\,B}{{\rm{Number}}\,{\rm{of}}\,{\rm{nodes}}\,{\rm{in}}\,B}\)

The F-score is the harmonic mean of recall and precision:

\({\rm{F}}-{\rm{score}}=\frac{2* ({\rm{Precision}}* {\rm{Recall}})}{{\rm{Precision}}+{\rm{Recall}}}\)

The highest possible value of an F-score is 1.0, indicating perfect precision and recall, and the lowest possible value is 0, if either precision or recall are zero.
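A minimal sketch of equations (13)–(15), assuming the two annotators' outputs have been reduced to sets of comparable annotation nodes, is shown below.

```python
# Minimal sketch of precision, recall, and F-score between two annotators A and B.
def agreement(nodes_a: set, nodes_b: set) -> dict:
    identical = len(nodes_a & nodes_b)
    recall = identical / len(nodes_a) if nodes_a else 0.0
    precision = identical / len(nodes_b) if nodes_b else 0.0
    f_score = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return {"precision": precision, "recall": recall, "f_score": f_score}

# Toy example: A marked 10 clause boundaries, B marked 9, and 8 of them coincide.
print(agreement(set(range(10)), set(range(2, 11))))
```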

In accordance with Taghipour and Ng ( 2016 ), the calculation of QWK involves two steps:

Step 1: Construct a weight matrix W as follows:

\({W}_{{ij}}=\frac{{(i-j)}^{2}}{{(N-1)}^{2}}\)

i represents the annotation made by the tool, while j represents the annotation made by a human rater. N denotes the total number of possible annotations. Matrix O is subsequently computed, where \(O_{i,j}\) represents the count of data annotated by the tool ( i ) and the human annotator ( j ). On the other hand, E refers to the expected count matrix, which undergoes normalization to ensure that the sum of elements in E matches the sum of elements in O.

Step 2: With matrices O and E, the QWK is obtained as follows:

\(K=1-\frac{\sum_{i,j}W_{i,j}O_{i,j}}{\sum_{i,j}W_{i,j}E_{i,j}}\)
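The two steps above can be sketched as follows (an illustration, not the study's script), assuming the six CEFR levels are integer-coded 0–5.

```python
# Minimal sketch of quadratically weighted kappa (QWK) from tool and human labels.
import numpy as np

def qwk(tool: list[int], human: list[int], n_labels: int = 6) -> float:
    w = np.array([[(i - j) ** 2 / (n_labels - 1) ** 2 for j in range(n_labels)]
                  for i in range(n_labels)])                      # Step 1: weight matrix W
    o = np.zeros((n_labels, n_labels))
    for t, h in zip(tool, human):
        o[t, h] += 1                                              # observed count matrix O
    e = np.outer(o.sum(axis=1), o.sum(axis=0)) / o.sum()         # expected matrix E, normalised to sum(O)
    return 1 - (w * o).sum() / (w * e).sum()                      # Step 2: K = 1 - sum(W*O)/sum(W*E)

print(qwk([0, 1, 2, 3, 4, 5], [0, 1, 2, 3, 4, 4]))
```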

The value of the quadratic weighted kappa increases as the level of agreement improves. Further, to assess the accuracy of LLM scoring, the proportional reductive mean square error (PRMSE) was employed. The PRMSE approach takes into account the variability observed in human ratings to estimate the rater error, which is then subtracted from the variance of the human labels. This calculation provides an overall measure of agreement between the automated scores and true scores (Haberman et al. 2015 ; Loukina et al. 2020 ; Taghipour and Ng, 2016 ). The computation of PRMSE involves the following steps:

Step 1: Calculate the mean squared errors (MSEs) for the scoring outcomes of the computer-assisted tool (MSE tool) and the human scoring outcomes (MSE human).

Step 2: Determine the PRMSE by comparing the MSE of the computer-assisted tool (MSE tool) with the MSE from human raters (MSE human), using the following formula:

\(\mathrm{PRMSE}=1-\frac{\mathrm{MSE}_{\mathrm{tool}}}{\mathrm{MSE}_{\mathrm{human}}}=1-\frac{\sum_{i=1}^{n}(y_{i}-\hat{y}_{i})^{2}}{\sum_{i=1}^{n}(y_{i}-\hat{y})^{2}}\)

In the numerator, ŷi represents the scoring outcome predicted by a specific LLM-driven AES system for a given sample. The term y i − ŷ i represents the difference between this predicted outcome and the mean value of all LLM-driven AES systems’ scoring outcomes. It quantifies the deviation of the specific LLM-driven AES system’s prediction from the average prediction of all LLM-driven AES systems. In the denominator, y i − ŷ represents the difference between the scoring outcome provided by a specific human rater for a given sample and the mean value of all human raters’ scoring outcomes. It measures the discrepancy between the specific human rater’s score and the average score given by all human raters. The PRMSE is then calculated by subtracting the ratio of the MSE tool to the MSE human from 1. PRMSE falls within the range of 0 to 1, with larger values indicating reduced errors in LLM’s scoring compared to those of human raters. In other words, a higher PRMSE implies that LLM’s scoring demonstrates greater accuracy in predicting the true scores (Loukina et al. 2020 ). The interpretation of kappa values, ranging from 0 to 1, is based on the work of Landis and Koch ( 1977 ). Specifically, the following categories are assigned to different ranges of kappa values: −1 indicates complete inconsistency, 0 indicates random agreement, 0.0 ~ 0.20 indicates extremely low level of agreement (slight), 0.21 ~ 0.40 indicates moderate level of agreement (fair), 0.41 ~ 0.60 indicates medium level of agreement (moderate), 0.61 ~ 0.80 indicates high level of agreement (substantial), 0.81 ~ 1 indicates almost perfect level of agreement. All statistical analyses were executed using Python script.
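The following simplified sketch follows the two-step PRMSE formula above, using the mean of the human ratings as a proxy for the true score; the full operational definition (e.g. Haberman et al. 2015) includes corrections omitted here.

```python
# Simplified sketch of PRMSE = 1 - MSE_tool / MSE_human (illustrative assumptions only).
import numpy as np

def prmse(tool_scores, human_ratings):
    human = np.asarray(human_ratings, dtype=float)     # shape: (n_essays, n_raters)
    tool = np.asarray(tool_scores, dtype=float)        # shape: (n_essays,)
    true = human.mean(axis=1)                          # proxy for the true score per essay
    mse_tool = np.mean((tool - true) ** 2)             # tool error around the true score
    mse_human = np.mean((human - true[:, None]) ** 2)  # raters' error around the true score
    return 1 - mse_tool / mse_human

print(prmse([3.5, 4.4, 2.5], [[3, 4], [4, 5], [2, 3]]))
```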

Results and discussion

Annotation reliability of the LLM

This section focuses on assessing the reliability of the LLM’s annotation and scoring capabilities. To evaluate the reliability, several tests were conducted simultaneously, aiming to achieve the following objectives:

Assess the LLM's ability to differentiate between test takers with varying levels of writing proficiency.

Determine the level of agreement between the annotations and scoring performed by the LLM and those done by human raters.

The evaluation of the results encompassed several metrics, including: precision, recall, F-Score, quadratically-weighted kappa, proportional reduction of mean squared error, Pearson correlation, and multi-faceted Rasch measurement.

Inter-annotator agreement (human–human annotator agreement)

We started with an agreement test of the two human annotators. Two trained annotators were recruited to determine the writing task data measures. A total of 714 scripts, as the test data, was utilized. Each analysis lasted 300–360 min. Inter-annotator agreement was evaluated using the standard measures of precision, recall, and F-score and QWK. Table 7 presents the inter-annotator agreement for the various indicators. As shown, the inter-annotator agreement was fairly high, with F-scores ranging from 1.0 for sentence and word number to 0.666 for grammatical errors.

The findings from the QWK analysis provided further confirmation of the inter-annotator agreement. The QWK values covered a range from 0.950 ( p  = 0.000) for sentence and word number to 0.695 for synonym overlap number (keyword) and grammatical errors ( p  = 0.001).

Agreement of annotation outcomes between human and LLM

To evaluate the consistency between human annotators and LLM annotators (BERT, GPT, OCLL) across the indices, the same test was conducted. The results of the inter-annotator agreement (F-score) between LLM and human annotation are provided in Appendix B-D. The F-scores ranged from 0.706 for Grammatical error # for OCLL-human to a perfect 1.000 for GPT-human, for sentences, clauses, T-units, and words. These findings were further supported by the QWK analysis, which showed agreement levels ranging from 0.807 ( p  = 0.001) for metadiscourse markers for OCLL-human to 0.962 for words ( p  = 0.000) for GPT-human. The findings demonstrated that the LLM annotation achieved a significant level of accuracy in identifying measurement units and counts.

Reliability of LLM-driven AES’s scoring and discriminating proficiency levels

This section examines the reliability of the LLM-driven AES scoring through a comparison of the scoring outcomes produced by human raters and the LLM ( Reliability of LLM-driven AES scoring ). It also assesses the effectiveness of the LLM-based AES system in differentiating participants with varying proficiency levels ( Reliability of LLM-driven AES discriminating proficiency levels ).

Reliability of LLM-driven AES scoring

Table 8 summarizes the QWK coefficient analysis between the scores computed by the human raters and the GPT-4 for the individual essays from I-JAS Footnote 7 . As shown, the QWK of all measures ranged from k  = 0.819 for lexical density (number of lexical words (tokens)/number of words per essay) to k  = 0.644 for word2vec cosine similarity. Table 9 further presents the Pearson correlations between the 16 writing proficiency measures scored by human raters and GPT 4 for the individual essays. The correlations ranged from 0.672 for syntactic complexity to 0.734 for grammatical accuracy. The correlations between the writing proficiency scores assigned by human raters and the BERT-based AES system were found to range from 0.661 for syntactic complexity to 0.713 for grammatical accuracy. The correlations between the writing proficiency scores given by human raters and the OCLL-based AES system ranged from 0.654 for cohesion to 0.721 for grammatical accuracy. These findings indicated an alignment between the assessments made by human raters and both the BERT-based and OCLL-based AES systems in terms of various aspects of writing proficiency.

Reliability of LLM-driven AES discriminating proficiency levels

After validating the reliability of the LLM’s annotation and scoring, the subsequent objective was to evaluate its ability to distinguish between various proficiency levels. For this analysis, a dataset of 686 individual essays was utilized. Table 10 presents a sample of the results, summarizing the means, standard deviations, and the outcomes of the one-way ANOVAs based on the measures assessed by the GPT-4 model. A post hoc multiple comparison test, specifically the Bonferroni test, was conducted to identify any potential differences between pairs of levels.

As the results reveal, seven measures presented linear upward or downward progress across the three proficiency levels. These were marked in bold in Table 10 and comprise one measure of lexical richness, i.e. MATTR (lexical diversity); four measures of syntactic complexity, i.e. MDD (mean dependency distance), MLC (mean length of clause), CNT (complex nominals per T-unit), CPC (coordinate phrases rate); one cohesion measure, i.e. word2vec cosine similarity and GER (grammatical error rate). Regarding the ability of the sixteen measures to distinguish adjacent proficiency levels, the Bonferroni tests indicated that statistically significant differences exist between the primary level and the intermediate level for MLC and GER. One measure of lexical richness, namely LD, along with three measures of syntactic complexity (VPT, CT, DCT, ACC), two measures of cohesion (SOPT, SOPK), and one measure of content elaboration (IMM), exhibited statistically significant differences between proficiency levels. However, these differences did not demonstrate a linear progression between adjacent proficiency levels. No significant difference was observed in lexical sophistication between proficiency levels.

To summarize, our study aimed to evaluate the reliability and differentiation capabilities of the LLM-driven AES method. For the first objective, we assessed the LLM's ability to differentiate between test takers with varying levels of writing proficiency using precision, recall, F-score, and quadratically weighted kappa. Regarding the second objective, we compared the scoring outcomes generated by human raters and the LLM to determine the level of agreement. We employed quadratically weighted kappa and Pearson correlations to compare the 16 writing proficiency measures for the individual essays. The results confirmed the feasibility of using the LLM for annotation and scoring in AES for nonnative Japanese. As a result, Research Question 1 has been addressed.

Comparison of BERT-, GPT-, OCLL-based AES, and linguistic-feature-based computation methods

This section aims to compare the effectiveness of five AES methods for nonnative Japanese writing, i.e. LLM-driven approaches utilizing BERT, GPT, and OCLL, linguistic feature-based approaches using Jess and JWriter. The comparison was conducted by comparing the ratings obtained from each approach with human ratings. All ratings were derived from the dataset introduced in Dataset . To facilitate the comparison, the agreement between the automated methods and human ratings was assessed using QWK and PRMSE. The performance of each approach was summarized in Table 11 .

The QWK coefficient values indicate that LLMs (GPT, BERT, OCLL) and human rating outcomes demonstrated higher agreement compared to feature-based AES methods (Jess and JWriter) in assessing writing proficiency criteria, including lexical richness, syntactic complexity, content, and grammatical accuracy. Among the LLMs, the GPT-4 driven AES and human rating outcomes showed the highest agreement in all criteria, except for syntactic complexity. The PRMSE values suggest that the GPT-based method outperformed linguistic feature-based methods and other LLM-based approaches. Moreover, an interesting finding emerged during the study: the agreement coefficient between GPT-4 and human scoring was even higher than the agreement between different human raters themselves. This discovery highlights the advantage of GPT-based AES over human rating. Ratings involve a series of processes, including reading the learners’ writing, evaluating the content and language, and assigning scores. Within this chain of processes, various biases can be introduced, stemming from factors such as rater biases, test design, and rating scales. These biases can impact the consistency and objectivity of human ratings. GPT-based AES may benefit from its ability to apply consistent and objective evaluation criteria. By prompting the GPT model with detailed writing scoring rubrics and linguistic features, potential biases in human ratings can be mitigated. The model follows a predefined set of guidelines and does not possess the same subjective biases that human raters may exhibit. This standardization in the evaluation process contributes to the higher agreement observed between GPT-4 and human scoring. Section Prompt strategy of the study delves further into the role of prompts in the application of LLMs to AES. It explores how the choice and implementation of prompts can impact the performance and reliability of LLM-based AES methods. Furthermore, it is important to acknowledge the strengths of the local model, i.e. the Japanese local model OCLL, which excels in processing certain idiomatic expressions. Nevertheless, our analysis indicated that GPT-4 surpasses local models in AES. This superior performance can be attributed to the larger parameter size of GPT-4, estimated to be between 500 billion and 1 trillion, which exceeds the sizes of both BERT and the local model OCLL.

Prompt strategy

In the context of prompt strategy, Mizumoto and Eguchi (2023) applied the GPT-3 model to automatically score English essays from the TOEFL test. They found that the accuracy of the GPT model alone was moderate to fair, but that incorporating linguistic measures such as cohesion, syntactic complexity, and lexical features alongside the model significantly improved accuracy. This highlights the importance of prompt engineering: providing the model with specific instructions enhances its performance. A similar approach was taken in this study to optimize the performance of the LLMs. GPT-4, which outperformed BERT and OCLL, was selected as the candidate model. Model 1 served as the baseline, representing GPT-4 without any additional prompting. Model 2 involved GPT-4 prompted with all 16 measures, including the scoring criteria, linguistic features effective for writing assessment, and detailed measurement units and calculation formulas. The remaining models (Models 3 to 18) used GPT-4 prompted with one measure at a time. The performance of these 18 models was assessed using the output indicators described in Section Criteria (output indicator). By comparing their performance, the study aimed to understand the impact of prompt engineering on the accuracy and effectiveness of GPT-4 in AES tasks. A minimal illustration of this kind of prompting setup is sketched below.
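The sketch below shows, under stated assumptions, how GPT-4 might be prompted with a scoring rubric plus explicit linguistic measures in the spirit of Model 2. The rubric wording, the measure list, and the model identifier are illustrative placeholders rather than the authors’ actual prompts; the client call follows the openai (>=1.0) Python SDK.

```python
# Hedged sketch of rubric-plus-measures prompting for essay scoring.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

RUBRIC = (
    "Score the following Japanese learner essay from 1 (beginner) to 6 (advanced) "
    "for overall writing proficiency. Return only the numeric score."
)
MEASURES = (
    "When scoring, consider: lexical diversity (e.g. MATTR), lexical density, "
    "mean dependency distance, clauses per sentence, and grammatical accuracy."
)

def score_essay(essay_text: str) -> str:
    """Ask the model for a single numeric score guided by rubric + measures."""
    response = client.chat.completions.create(
        model="gpt-4",       # placeholder model identifier
        temperature=0,       # deterministic scoring
        messages=[
            {"role": "system", "content": RUBRIC + " " + MEASURES},
            {"role": "user", "content": essay_text},
        ],
    )
    return response.choices[0].message.content

# Example (hypothetical essay text):
# print(score_essay("私は昨日、友達と映画を見に行きました。..."))
```

In this setup, the baseline condition (Model 1) would simply omit the MEASURES string, and the single-measure conditions (Models 3 to 18) would include one measure at a time.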

Based on the PRMSE scores presented in Fig. 4, Model 1 (GPT-4 without any additional prompting) achieved a fair level of performance, whereas Model 2 (GPT-4 prompted with all measures) outperformed all other models, with a PRMSE of 0.681. These results indicate that including specific measures and prompts substantially enhanced the performance of GPT-4 in AES. Among the measures, syntactic complexity played a particularly significant role in improving the accuracy of GPT-4 in assessing writing quality, followed by lexical diversity. The findings suggest that a well-prompted GPT-4 can serve as a valuable tool to support human assessors in evaluating writing quality. Using GPT-4 as an automated scoring tool can minimize the evaluation biases associated with human raters, freeing teachers to focus on designing writing tasks and guiding writing strategies while relying on GPT-4 for efficient and reliable scoring.

Figure 4: PRMSE scores of the 18 AES models.

This study aimed to investigate two main research questions: the feasibility of utilizing LLMs for AES and the impact of prompt engineering on the application of LLMs in AES.

To address the first objective, the study compared the effectiveness of five different models: GPT, BERT, the Japanese local LLM (OCLL), and two conventional machine learning-based AES tools (Jess and JWriter). The PRMSE values indicated that the GPT-4-based method outperformed other LLMs (BERT, OCLL) and linguistic feature-based computational methods (Jess and JWriter) across various writing proficiency criteria. Furthermore, the agreement coefficient between GPT-4 and human scoring surpassed the agreement among human raters themselves, highlighting the potential of using the GPT-4 tool to enhance AES by reducing biases and subjectivity, saving time, labor, and cost, and providing valuable feedback for self-study. Regarding the second goal, the role of prompt design was investigated by comparing 18 models, including a baseline model, a model prompted with all measures, and 16 models prompted with one measure at a time. GPT-4, which outperformed BERT and OCLL, was selected as the candidate model. The PRMSE scores of the models showed that GPT-4 prompted with all measures achieved the best performance, surpassing the baseline and other models.

In conclusion, this study has demonstrated the potential of LLMs in supporting human rating in assessments. By incorporating automation, we can save time and resources while reducing biases and subjectivity inherent in human rating processes. Automated language assessments offer the advantage of accessibility, providing equal opportunities and economic feasibility for individuals who lack access to traditional assessment centers or necessary resources. LLM-based language assessments provide valuable feedback and support to learners, aiding in the enhancement of their language proficiency and the achievement of their goals. This personalized feedback can cater to individual learner needs, facilitating a more tailored and effective language-learning experience.

Three important areas merit further exploration. First, prompt engineering requires attention to ensure optimal performance of LLM-based AES across different language types. This study showed that GPT-4 prompted with all measures outperformed models prompted with fewer measures, so investigating and refining prompt strategies can further enhance the effectiveness of LLMs in automated language assessment. Second, the application of LLMs to second-language assessment and learning for oral proficiency deserves study. Recent advances in self-supervised machine learning have significantly improved automatic speech recognition (ASR) systems, yet challenges persist: automatic pronunciation evaluation assumes correct word pronunciation, which is problematic for learners in the early stages of acquisition whose accents are influenced by their native languages and whose short words are difficult to segment accurately; developing precise audio-text transcriptions for non-native accented speech remains a formidable task; and assessing oral proficiency requires capturing linguistic features such as fluency, pronunciation, accuracy, and complexity that current NLP technology does not easily capture. Third, the potential of LLMs for under-resourced languages, where limited data has so far constrained the development of reliable assessment systems, warrants investigation.

Data availability

The dataset utilized was obtained from the International Corpus of Japanese as a Second Language (I-JAS), available at https://www2.ninjal.ac.jp/jll/lsaj/ihome2.html.

J-CAT and TTBJ are two computerized adaptive tests used to assess Japanese language proficiency.

SPOT is a specific component of the TTBJ test.

J-CAT: https://www.j-cat2.org/html/ja/pages/interpret.html

SPOT: https://ttbj.cegloc.tsukuba.ac.jp/p1.html#SPOT .

The study utilized a prompt-based GPT-4 model, developed by OpenAI, which has an impressive architecture with 1.8 trillion parameters across 120 layers. GPT-4 was trained on a vast dataset of 13 trillion tokens, using two stages: initial training on internet text datasets to predict the next token, and subsequent fine-tuning through reinforcement learning from human feedback.

https://www2.ninjal.ac.jp/jll/lsaj/ihome2-en.html .

http://jhlee.sakura.ne.jp/JEV/ by Japanese Learning Dictionary Support Group 2015.

We express our sincere gratitude to the reviewer for bringing this matter to our attention.

On February 7, 2023, Microsoft began rolling out a major overhaul to Bing that included a new chatbot feature based on OpenAI’s GPT-4 (Bing.com).

Appendices E and F present the analysis of the QWK coefficients between the scores computed by the human raters and those of the BERT and OCLL models.

Attali Y, Burstein J (2006) Automated essay scoring with e-rater® V.2. J. Technol., Learn. Assess., 4

Barkaoui K, Hadidi A (2020) Assessing Change in English Second Language Writing Performance (1st ed.). Routledge, New York. https://doi.org/10.4324/9781003092346

Bentz C, Tatyana R, Koplenig A, Tanja S (2016) A comparison between morphological complexity measures: Typological data vs. language corpora. In Proceedings of the Workshop on Computational Linguistics for Linguistic Complexity (CL4LC), 142–153. Osaka, Japan: The COLING 2016 Organizing Committee

Bond TG, Yan Z, Heene M (2021) Applying the Rasch model: Fundamental measurement in the human sciences (4th ed). Routledge

Brants T (2000) Inter-annotator agreement for a German newspaper corpus. Proceedings of the Second International Conference on Language Resources and Evaluation (LREC’00), Athens, Greece, 31 May-2 June, European Language Resources Association

Brown TB, Mann B, Ryder N, et al. (2020) Language models are few-shot learners. Advances in Neural Information Processing Systems, Online, 6–12 December, Curran Associates, Inc., Red Hook, NY

Burstein J (2003) The E-rater scoring engine: Automated essay scoring with natural language processing. In Shermis MD and Burstein JC (ed) Automated Essay Scoring: A Cross-Disciplinary Perspective. Lawrence Erlbaum Associates, Mahwah, NJ

Čech R, Miroslav K (2018) Morphological richness of text. In Masako F, Václav C (ed) Taming the corpus: From inflection and lexis to interpretation, 63–77. Cham, Switzerland: Springer Nature

Çöltekin Ç, Rama T (2018) Exploiting Universal Dependencies treebanks for measuring morphosyntactic complexity. In Aleksandrs B, Christian B (ed), Proceedings of the First Workshop on Measuring Language Complexity, 1–7. Torun, Poland

Crossley SA, Cobb T, McNamara DS (2013) Comparing count-based and band-based indices of word frequency: Implications for active vocabulary research and pedagogical applications. System 41:965–981. https://doi.org/10.1016/j.system.2013.08.002

Crossley SA, McNamara DS (2016) Say more and be more coherent: How text elaboration and cohesion can increase writing quality. J. Writ. Res. 7:351–370

CyberAgent Inc (2023) Open-Calm series of Japanese language models. Retrieved from: https://www.cyberagent.co.jp/news/detail/id=28817

Devlin J, Chang MW, Lee K, Toutanova K (2019) BERT: Pre-training of deep bidirectional transformers for language understanding. Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics, Minneapolis, Minnesota, 2–7 June, pp. 4171–4186. Association for Computational Linguistics

Diez-Ortega M, Kyle K (2023) Measuring the development of lexical richness of L2 Spanish: a longitudinal learner corpus study. Studies in Second Language Acquisition 1-31

Eckes T (2009) On common ground? How raters perceive scoring criteria in oral proficiency testing. In Brown A, Hill K (ed) Language testing and evaluation 13: Tasks and criteria in performance assessment (pp. 43–73). Peter Lang Publishing

Elliot S (2003) IntelliMetric: from here to validity. In: Shermis MD, Burstein JC (ed) Automated Essay Scoring: A Cross-Disciplinary Perspective. Lawrence Erlbaum Associates, Mahwah, NJ

Engber CA (1995) The relationship of lexical proficiency to the quality of ESL compositions. J. Second Lang. Writ. 4:139–155

Garner J, Crossley SA, Kyle K (2019) N-gram measures and L2 writing proficiency. System 80:176–187. https://doi.org/10.1016/j.system.2018.12.001

Haberman SJ (2008) When can subscores have value? J. Educat. Behav. Stat., 33:204–229

Haberman SJ, Yao L, Sinharay S (2015) Prediction of true test scores from observed item scores and ancillary data. Brit. J. Math. Stat. Psychol. 68:363–385

Halliday MAK (1985) Spoken and Written Language. Deakin University Press, Melbourne, Australia

Hirao R, Arai M, Shimanaka H et al. (2020) Automated essay scoring system for nonnative Japanese learners. Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020), pp. 1250–1257. European Language Resources Association

Hunt KW (1966) Recent Measures in Syntactic Development. Elementary English, 43(7), 732–739. http://www.jstor.org/stable/41386067

Ishioka T (2001) About e-rater, a computer-based automatic scoring system for essays [Konpyūta ni yoru essei no jidō saiten shisutemu e-rater ni tsuite]. University Entrance Examination Forum [Daigaku nyūshi fōramu] 24:71–76

Hochreiter S, Schmidhuber J (1997) Long short- term memory. Neural Comput. 9(8):1735–1780

Ishioka T, Kameda M (2006) Automated Japanese essay scoring system based on articles written by experts. Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, Sydney, Australia, 17–18 July 2006, pp. 233-240. Association for Computational Linguistics, USA

Japan Foundation (2021) Retrieved from: https://www.jpf.gp.jp/j/project/japanese/survey/result/dl/survey2021/all.pdf

Jarvis S (2013a) Defining and measuring lexical diversity. In Jarvis S, Daller M (ed) Vocabulary knowledge: Human ratings and automated measures (Vol. 47, pp. 13–44). John Benjamins. https://doi.org/10.1075/sibil.47.03ch1

Jarvis S (2013b) Capturing the diversity in lexical diversity. Lang. Learn. 63:87–106. https://doi.org/10.1111/j.1467-9922.2012.00739.x

Jiang J, Quyang J, Liu H (2019) Interlanguage: A perspective of quantitative linguistic typology. Lang. Sci. 74:85–97

Kim M, Crossley SA, Kyle K (2018) Lexical sophistication as a multidimensional phenomenon: Relations to second language lexical proficiency, development, and writing quality. Mod. Lang. J. 102(1):120–141. https://doi.org/10.1111/modl.12447

Kojima T, Gu S, Reid M et al. (2022) Large language models are zero-shot reasoners. Advances in Neural Information Processing Systems, New Orleans, LA, 29 November-1 December, Curran Associates, Inc., Red Hook, NY

Kyle K, Crossley SA (2015) Automatically assessing lexical sophistication: Indices, tools, findings, and application. TESOL Q 49:757–786

Kyle K, Crossley SA, Berger CM (2018) The tool for the automatic analysis of lexical sophistication (TAALES): Version 2.0. Behav. Res. Methods 50:1030–1046. https://doi.org/10.3758/s13428-017-0924-4

Kyle K, Crossley SA, Jarvis S (2021) Assessing the validity of lexical diversity using direct judgements. Lang. Assess. Q. 18:154–170. https://doi.org/10.1080/15434303.2020.1844205

Landauer TK, Laham D, Foltz PW (2003) Automated essay scoring and annotation of essays with the Intelligent Essay Assessor. In Shermis MD, Burstein JC (ed), Automated Essay Scoring: A Cross-Disciplinary Perspective. Lawrence Erlbaum Associates, Mahwah, NJ

Landis JR, Koch GG (1977) The measurement of observer agreement for categorical data. Biometrics 159–174

Laufer B, Nation P (1995) Vocabulary size and use: Lexical richness in L2 written production. Appl. Linguist. 16:307–322. https://doi.org/10.1093/applin/16.3.307

Lee J, Hasebe Y (2017) jWriter Learner Text Evaluator, URL: https://jreadability.net/jwriter/

Lee J, Kobayashi N, Sakai T, Sakota K (2015) A Comparison of SPOT and J-CAT Based on Test Analysis [Tesuto bunseki ni motozuku ‘SPOT’ to ‘J-CAT’ no hikaku]. Research on the Acquisition of Second Language Japanese [Dainigengo to shite no nihongo no shūtoku kenkyū] (18) 53–69

Li W, Yan J (2021) Probability distribution of dependency distance based on a Treebank of Japanese EFL Learners’ Interlanguage. J. Quant. Linguist. 28(2):172–186. https://doi.org/10.1080/09296174.2020.1754611

Linacre JM (2002) Optimizing rating scale category effectiveness. J. Appl. Meas. 3(1):85–106

Linacre JM (1994) Constructing measurement with a Many-Facet Rasch Model. In Wilson M (ed) Objective measurement: Theory into practice, Volume 2 (pp. 129–144). Norwood, NJ: Ablex

Liu H (2008) Dependency distance as a metric of language comprehension difficulty. J. Cognitive Sci. 9:159–191

Liu H, Xu C, Liang J (2017) Dependency distance: A new perspective on syntactic patterns in natural languages. Phys. Life Rev. 21. https://doi.org/10.1016/j.plrev.2017.03.002

Loukina A, Madnani N, Cahill A, et al. (2020) Using PRMSE to evaluate automated scoring systems in the presence of label noise. Proceedings of the Fifteenth Workshop on Innovative Use of NLP for Building Educational Applications, Seattle, WA, USA → Online, 10 July, pp. 18–29. Association for Computational Linguistics

Lu X (2010) Automatic analysis of syntactic complexity in second language writing. Int. J. Corpus Linguist. 15:474–496

Lu X (2012) The relationship of lexical richness to the quality of ESL learners’ oral narratives. Mod. Lang. J. 96:190–208

Lu X (2017) Automated measurement of syntactic complexity in corpus-based L2 writing research and implications for writing assessment. Lang. Test. 34:493–511

Lu X, Hu R (2022) Sense-aware lexical sophistication indices and their relationship to second language writing quality. Behav. Res. Method. 54:1444–1460. https://doi.org/10.3758/s13428-021-01675-6

Ministry of Health, Labor, and Welfare of Japan (2022) Retrieved from: https://www.mhlw.go.jp/stf/newpage_30367.html

Mizumoto A, Eguchi M (2023) Exploring the potential of using an AI language model for automated essay scoring. Res. Methods Appl. Linguist. 3:100050

Okgetheng B, Takeuchi K (2024) Estimating Japanese Essay Grading Scores with Large Language Models. Proceedings of 30th Annual Conference of the Language Processing Society in Japan, March 2024

Ortega L (2015) Second language learning explained? SLA across 10 contemporary theories. In VanPatten B, Williams J (ed) Theories in Second Language Acquisition: An Introduction

Rae JW, Borgeaud S, Cai T, et al. (2021) Scaling Language Models: Methods, Analysis & Insights from Training Gopher. ArXiv, abs/2112.11446

Read J (2000) Assessing vocabulary. Cambridge University Press. https://doi.org/10.1017/CBO9780511732942

Rudner LM, Liang T (2002) Automated Essay Scoring Using Bayes’ Theorem. J. Technol., Learning and Assessment, 1 (2)

Sakoda K, Hosoi Y (2020) Accuracy and complexity of Japanese Language usage by SLA learners in different learning environments based on the analysis of I-JAS, a learners’ corpus of Japanese as L2. Math. Linguist. 32(7):403–418. https://doi.org/10.24701/mathling.32.7_403

Suzuki N (1999) Summary of survey results regarding comprehensive essay questions. Final report of “Joint Research on Comprehensive Examinations for the Aim of Evaluating Applicability to Each Specialized Field of Universities” for 1996-2000 [shōronbun sōgō mondai ni kansuru chōsa kekka no gaiyō. Heisei 8 - Heisei 12-nendo daigaku no kaku senmon bun’ya e no tekisei no hyōka o mokuteki to suru sōgō shiken no arikata ni kansuru kyōdō kenkyū’ saishū hōkoku-sho]. University Entrance Examination Section Center Research and Development Department [Daigaku nyūshi sentā kenkyū kaihatsubu], 21–32

Taghipour K, Ng HT (2016) A neural approach to automated essay scoring. Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, Austin, Texas, 1–5 November, pp. 1882–1891. Association for Computational Linguistics

Takeuchi K, Ohno M, Motojin K, Taguchi M, Inada Y, Iizuka M, Abo T, Ueda H (2021) Development of essay scoring methods based on reference texts with construction of research-available Japanese essay data. In IPSJ J 62(9):1586–1604

Ure J (1971) Lexical density: A computational technique and some findings. In Coultard M (ed) Talking about Text. English Language Research, University of Birmingham, Birmingham, England

Vaswani A, Shazeer N, Parmar N, et al. (2017) Attention is all you need. In Advances in Neural Information Processing Systems, Long Beach, CA, 4–7 December, pp. 5998–6008, Curran Associates, Inc., Red Hook, NY

Watanabe H, Taira Y, Inoue Y (1988) Analysis of essay evaluation data [Shōronbun hyōka dēta no kaiseki]. Bulletin of the Faculty of Education, University of Tokyo [Tōkyōdaigaku kyōiku gakubu kiyō], Vol. 28, 143–164

Yao S, Yu D, Zhao J, et al. (2023) Tree of thoughts: Deliberate problem solving with large language models. Advances in Neural Information Processing Systems, 36

Zenker F, Kyle K (2021) Investigating minimum text lengths for lexical diversity indices. Assess. Writ. 47:100505. https://doi.org/10.1016/j.asw.2020.100505

Zhang Y, Warstadt A, Li X, et al. (2021) When do you need billions of words of pretraining data? Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, Online, pp. 1112-1125. Association for Computational Linguistics. https://doi.org/10.18653/v1/2021.acl-long.90

This research was funded by National Foundation of Social Sciences (22BYY186) to Wenchao Li.

Author information

Authors and Affiliations

Department of Japanese Studies, Zhejiang University, Hangzhou, China

Department of Linguistics and Applied Linguistics, Zhejiang University, Hangzhou, China

Contributions

Wenchao Li is in charge of conceptualization, validation, formal analysis, investigation, data curation, visualization and writing the draft. Haitao Liu is in charge of supervision.

Corresponding author

Correspondence to Wenchao Li.

Ethics declarations

Competing interests

The authors declare no competing interests.

Ethical approval

Ethical approval was not required as the study did not involve human participants.

Informed consent

This article does not contain any studies with human participants performed by any of the authors.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Supplemental material file #1

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

About this article

Cite this article

Li, W., Liu, H. Applying large language models for automated essay scoring for non-native Japanese. Humanit Soc Sci Commun 11, 723 (2024). https://doi.org/10.1057/s41599-024-03209-9

Received : 02 February 2024

Accepted : 16 May 2024

Published : 03 June 2024

DOI : https://doi.org/10.1057/s41599-024-03209-9
