
The 16th Edition of the Multi-Agent Programming Contest - The GOAL-DTU Team

The Multi-Agent Programming Contest 2022 (MAPC 2022)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 13997)

Abstract

We provide an overview of the GOAL-DTU system for the Multi-Agent Programming Contest, including the overall strategy and how the system is designed to apply this strategy. Our agents are implemented using the GOAL programming language. We evaluate the performance of our agents in the contest and, finally, we discuss how to improve the system based on an analysis of its strengths and weaknesses.



Acknowledgements

We thank the anonymous reviewers for helpful comments.

Author information

Correspondence to Jørgen Villadsen.

A Team Overview: Short Answers

A.1 Participants and Their Background

  • Who is part of your team?

    This year, three people were involved: Jørgen Villadsen (PhD), Jonas Weile (MSc student) and, during the contest days, Markus Fridlev Schlenzig (BSc student). Alexander Birch Jensen and Erik Kristian Gylling were involved in earlier iterations of the code that we have built upon.

  • What was your motivation to participate in the contest?

    To study multi-agent systems in a realistic, but simulated, environment and to enhance our knowledge of the GOAL agent programming language.

  • What is the history of your group? (course project, thesis, ...)

    Our team name is GOAL-DTU. We participated in the contest in 2009 and 2010 as the Jason-DTU team, in 2011 and 2012 as the Python-DTU team, in 2013 and 2014 as the GOAL-DTU team, in 2015/2016 as the Python-DTU team, in 2017 and 2018 as the Jason-DTU team, and in 2019 and 2020/2021 as the GOAL-DTU team. We are affiliated with the Algorithms, Logic and Graphs section at DTU Compute, Department of Applied Mathematics and Computer Science, Technical University of Denmark (DTU). DTU Compute is located in the greater Copenhagen area. The main contact is associate professor Jørgen Villadsen, email: jovi@dtu.dk

  • What is your field of research? Which work therein is related?

    We are responsible for the Artificial Intelligence and Algorithms study line of the MSc in Computer Science and Engineering programme.

A.2 Statistics

  • Did you start your agent team from scratch, or did you build on existing agents (from yourself or another previous participant)?

    As our starting point, we used the code from last year's competition, which in turn built upon the code from the 2020 competition.

  • How much time did you invest in the contest (for programming, organising your group, other)?

    We spent approximately 60 hours further developing the code from the previous iteration.

  • How was the time (roughly) distributed over the months before the contest?

    Most of the time was spent in August leading up to the qualification. After qualifying, no further improvements were made.

  • How many lines of code did you produce for your final agent team?

    We have about 2000 lines of code.

A.3 Technology and Techniques

Did you use any of these agent technology/AOSE methods or tools? What were your experiences?

  • Agent programming languages and/or frameworks?

    We used GOAL, which is a quite easy and intuitive agent programming language.

  • Methodologies (e.g. Prometheus)?

    No.

  • Notation (e.g. Agent UML)?

    No.

  • Coordination mechanisms (e.g. protocols, games, ...)?

    No.

  • Other (methods/concepts/tools)?

    We used the Eclipse IDE for programming (it has a GOAL add-on).

What hardware did you use during the contest?

We used a laptop.

A.4 Agent System Details

  • Would you say your system is decentralised? Why?

    The team communicates via messages and channels to share information and agree on plans. The approach is mostly decentralized, but planning tasks are currently delegated to a single master agent.

  • Do your agents use the following features: Planning, Learning, Organisations, Norms? If so, please elaborate briefly.

    The agents use planning to choose which tasks to pursue. A single planning agent, chosen dynamically at run time, relies on input from all other agents; it searches through combinations of agent-to-block assignments and chooses the most promising one. A sketch of such an assignment search is given after this list.

  • How do your agents cooperate?

    The agents reactively decide on their actions based on the current percepts, their beliefs and their goals. They use predetermined rules and actions.

  • Can your agents change their general behaviour during run time? If so, what triggers the changes?

    An agent will change its behaviour when it is chosen to take part in solving a task.

  • Did you have to make changes to the team (e.g. fix critical bugs) during the contest?

    We chose not to make changes during the contest.

  • How did you go about debugging your system? What kinds of measures could improve your debugging experience?

    We used log files to record the agents' belief bases and percepts.

  • During the contest, you were not allowed to watch the matches. How did you track what was going on? Was it helpful?

    We only did basic logging to the console.

  • Did you invest time in making your agents more robust/fault-tolerant? How?

    As evident from the competition results, we did not spend enough time on this aspect.
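
To make the planning step mentioned above concrete, here is a minimal sketch of an assignment search of the kind the planning agent performs. It is written in Python for readability (our agents are written in GOAL), and the report format, the field names and the cost model are hypothetical illustrations, not the actual implementation.

    # Illustrative sketch (Python, not GOAL): the elected planning agent scores
    # ways of delegating the blocks required by a task to distinct agents and
    # picks the cheapest feasible combination. All names and fields are
    # hypothetical; the real agents exchange this information via GOAL messages.
    from itertools import permutations

    def plan_task(required_blocks, agent_reports):
        """required_blocks: block types needed by the task, e.g. ['b0', 'b1'].
        agent_reports: agent name -> {block type: estimated steps to fetch that
        block and reach the goal zone}."""
        best_plan, best_cost = None, float('inf')
        for agents in permutations(agent_reports, len(required_blocks)):
            cost, feasible = 0, True
            for agent, block in zip(agents, required_blocks):
                if block not in agent_reports[agent]:
                    feasible = False
                    break
                cost += agent_reports[agent][block]
            if feasible and cost < best_cost:
                best_plan, best_cost = list(zip(agents, required_blocks)), cost
        return best_plan  # e.g. [('agentA2', 'b0'), ('agentA5', 'b1')], or None

    reports = {'agentA2': {'b0': 7, 'b1': 12},
               'agentA5': {'b1': 5},
               'agentA7': {'b0': 9}}
    print(plan_task(['b0', 'b1'], reports))  # [('agentA2', 'b0'), ('agentA5', 'b1')]

For a full team this brute-force enumeration would of course have to be pruned; the sketch only illustrates the idea of scoring assignment combinations and keeping the most promising one.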

A.5 Scenario and Strategy

  • How would you describe your intended agent behaviour? Did the actual behaviour deviate from that?

    First, the agents explore to find each other, goal zones and dispensers. Once most agents have connected, they collectively elect a master agent to do the planning. Once elected, this agent continuously asks the other agents about their available resources and tries to create task plans. Each task plan is sent to all agents involved in it, and these agents try to solve it as efficiently as possible.

  • Why did your team perform as it did? Why did the other teams perform better/worse than you did?

    We defined the overall strategy, and the task plans are created autonomously. We would have liked more flexibility for the agents to evaluate their strategy and correct it as needed.

  • Did you implement any strategy that tries to interfere with your opponents?

    We worked on some clearing strategies to defend goal cells, and to scare off opponents. However, they seemingly did more harm than good at the competition.

  • How do your agents coordinate assembling and delivering a structure for a task?

    The planning agent creates a structured plan describing which agent should deliver which blocks, based on the input it receives from the other agents. All agents involved in delivering a task then continuously check whether the plan remains feasible and update it if necessary. A sketch of such a plan and its feasibility check is given after this list.

  • Which aspect(s) of the scenario did you find particularly challenging?

    The map was made even more dynamic than in preceding years, which was definitely a challenge.

  • What would you improve (wrt. your agents) if you wanted to participate in the same contest a week from now (or next year)?

    If the contest were a week from now, we would mainly focus on bug fixing and thorough testing. If we had a whole year, we would work on changing the way we solve tasks and do planning: we would further decentralize it, removing most of the responsibility of the planning agent, and make the assembling of blocks more dynamic. We should also make better use of agents not partaking in solving tasks, as well as improve our defensive strategies.

  • What can be improved regarding the scenario for next year? What would you remove? What would you add?

    We suggest that the organisers execute the agents on the same infrastructure; it would then be interesting to decrease the time available for the agents to decide on their actions.
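
As a companion to the answer on task coordination above, here is a minimal sketch of how a task plan and its per-step feasibility check could be represented. Again it is Python rather than GOAL, and the record fields, the deadline handling and the step estimates are hypothetical stand-ins for the agents' actual beliefs.

    # Illustrative sketch (Python, not GOAL): a task plan as a shared record and
    # a feasibility check that every involved agent can re-run each step. Field
    # names and the slack criterion are assumptions for illustration only.
    from dataclasses import dataclass

    @dataclass
    class Delivery:
        agent: str       # agent responsible for this part of the task
        block: str       # block type it must attach and deliver
        steps_left: int  # its current estimate of steps to the goal zone

    @dataclass
    class TaskPlan:
        task: str
        deadline: int    # simulation step at which the task expires
        deliveries: list # one Delivery per required block

    def still_feasible(plan, current_step):
        """If this fails for any involved agent, a new plan is requested."""
        slack = plan.deadline - current_step
        return all(d.steps_left <= slack for d in plan.deliveries)

    plan = TaskPlan('task12', deadline=340,
                    deliveries=[Delivery('agentA2', 'b0', steps_left=15),
                                Delivery('agentA5', 'b1', steps_left=22)])
    print(still_feasible(plan, current_step=320))  # False: agentA5 cannot make it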

A.6 And the Moral of it is ...

  • What did you learn from participating in the contest?

    We learned a lot about using GOAL to write multi-agent programs. We were reminded of the care it takes to develop and test in multi-agent environments.

  • What advice would you give to yourself before the contest/another team wanting to participate in the next?

    Start early, because unexpected problems will occur. Have a clear testing strategy. The coordination between agents works quite well, and the A* path finding helps agents move directly. Agents could be more flexible in helping each other and prioritizing other agents' tasks over their own when it is better for the team.

  • Where did you benefit from your chosen programming language, methodology, tools, and algorithms?

    GOAL has built-in functionality that allows agents to communicate with one another, and it has a predefined agent cycle that is suitable for the belief-desire-intention model. A* was used by the agents to determine movement actions for short distances; a small sketch of such a grid search is given after this list.

  • Which problems did you encounter because of your chosen technologies?

    Writing thorough tests for GOAL code can be challenging.

  • Which aspect of your team cost you the most time?

    Some unexpected problems (unrelated to the contest) ended up costing us a team member, and another team member had less time to work on the project than anticipated. This led to a large loss of potential time.
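
Since A* is mentioned above as the basis for short-distance movement, here is a minimal sketch of grid A* with a Manhattan-distance heuristic, in Python rather than GOAL. The bounded map, the 4-neighbourhood and the obstacle set are simplifying assumptions for illustration; the agents' actual maps are built from percepts.

    # Illustrative sketch (Python, not GOAL): A* on a grid with the Manhattan
    # distance as heuristic, returning a short path around obstacles.
    import heapq

    def astar(start, goal, obstacles, width, height):
        """Return a list of (x, y) cells from start to goal, or None."""
        def h(p):  # Manhattan distance to the goal
            return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

        frontier = [(h(start), 0, start, [start])]
        best_g = {start: 0}
        while frontier:
            _, g, pos, path = heapq.heappop(frontier)
            if pos == goal:
                return path
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nxt = (pos[0] + dx, pos[1] + dy)
                if not (0 <= nxt[0] < width and 0 <= nxt[1] < height):
                    continue  # outside the known map
                if nxt in obstacles:
                    continue  # blocked cell
                if g + 1 < best_g.get(nxt, float('inf')):
                    best_g[nxt] = g + 1
                    heapq.heappush(frontier,
                                   (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
        return None

    # Example: route from (0, 0) to (3, 2) around a small wall.
    print(astar((0, 0), (3, 2), obstacles={(1, 0), (1, 1)}, width=5, height=4))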

A.7 Looking into the Future

  • Did the warm-up match help improve your team of agents? How useful do you think it is?

    It was not really useful due to the lack of time for improvements.

  • What are your thoughts on changing how the contest is run, so that the participants’ agents are executed on the same infrastructure by the organisers? What do you see as positive or negative about this approach?

    Yes, it would be great if the agents were executed on the same infrastructure.

  • Do you think a match containing more than two teams should be mandatory?

    Maybe—perhaps if the agents are executed on the same infrastructure.

  • What else can be improved regarding the MAPC for next year?

    We would prefer more or less the same scenario.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG


Cite this paper

Villadsen, J., Weile, J. (2023). The 16th Edition of the Multi-Agent Programming Contest - The GOAL-DTU Team. In: Ahlbrecht, T., Dix, J., Fiekas, N., Krausburg, T. (eds) The Multi-Agent Programming Contest 2022. MAPC 2022. Lecture Notes in Computer Science (LNAI), vol. 13997. Springer, Cham. https://doi.org/10.1007/978-3-031-38712-8_6


  • Print ISBN: 978-3-031-38711-1

  • Online ISBN: 978-3-031-38712-8
