2026.04.15
What is Chu-Katsu Bot?--An AI System for Practicing Job Interviews
- Hisashi Suzuki
- Professor, Faculty of Science and Engineering, Chuo University
Areas of Specialization: Machine Psychology, Cybernetics, Information Theory, Artificial Intelligence, and Medical Engineering
1. Diversity in artificial intelligence research
If you enter "The Japanese Society for Artificial Intelligence--What's AI?" into a search engine, open the "What's AI" page on which the Society offers an accessible explanation of AI, and then click on "Artificial Intelligence Research,"[1] you will see that AI research encompasses a wide range of technologies, including genetic algorithms, expert systems, speech recognition, image recognition, affective computing, machine learning, games, natural language processing, information retrieval, reasoning, searching, knowledge representation, data mining, neural networks, human interfaces, planning, multi-agent systems, and robotics.
For example, neural networks, typified by the recently popular deep learning, are observable in the sense that the memory layers corresponding to their internal state can be examined. Although neural networks can learn repeatedly through trial and error so as to avoid incorporating bias or generating hallucinations (false or erratic outputs), they are not fully controllable in the sense that those layers cannot be freely adjusted to produce a desired output. In contrast, expert systems, which explicitly verbalize craftsmanship or expert knowledge and compile it into rule-based form, are both observable and controllable.
2. Will we be ruled by AI, or master it as a tool?
For example, when a non-native speaker learns English, the first step is normally to acquire grammar and other elements systematically, as a language. Next, through study abroad and similar experiences, the learner acquires practical communication skills, as an art of using the language. In the same way, practical AI is not simply a choice between memory-based learning and rule-based systems; rather, it is common to operate AI by balancing the two.
However, human beings have a tendency to neglect thinking for themselves. As a result, we sometimes assert that learning a foreign language is pointless because machine translation exists, or mistake the driver-assistance functions of automobiles for fully autonomous driving. We may also assume that reports, papers, and entry sheets (the application forms used in Japanese job hunting) will all become uniform if we rely on generative AI instead of writing them ourselves. In each case, we seem prone to imagining ourselves abandoning control over AI and being ruled by machines.
From a technical standpoint, whether the internal state of an autonomously functioning AI can be observed and whether that internal state can be modified are separate issues, and there is no need to link them. It is true that the internal state of commercial AI is not disclosed, and that even when an AI's code is open and its internal state is observable, users may take no interest in it; but none of this means that users must become intellectually passive toward AI outputs. Just as we use vehicles for rapid long-distance travel, excavators or micromachines for transforming the scale of force, and air conditioners for maintaining indoor temperature, we must master AI as a tool that supports analytical thinking, including reasoning, in accordance with our own conscience.
3. Support for objective decision-making based on norms
For example, in extreme situations such as infectious disease outbreaks or disasters, staff are forced to perform emotionally exhausting triage. Suppose we imagine neurons that store, as their internal state, degrees of affirmation for "a severely injured person," "a mildly injured person," "another mildly injured person," "survival," and "treat." Also assume that we use machine learning to store degrees of affirmation for statements such as "a severely injured person rarely survives even with treatment," "a severely injured person without treatment never survives," "a mildly injured person with treatment always survives," "even without treatment, a mildly injured person tends to survive," "another mildly injured person with treatment always survives," and "another mildly injured person without treatment also tends to survive." Finally, suppose that we express the physical limits of life-saving resources as rules such as "if the severely injured person is treated, neither mildly injured person can be treated," "if one mildly injured person is treated, the severely injured person cannot be treated," and "if the other mildly injured person is treated, the severely injured person cannot be treated." In this example, what decision would the AI make?
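The setup above can be sketched as a toy enumeration over treatment plans. The survival numbers below are hypothetical stand-ins for the qualitative degrees of affirmation in the text ("never," "rarely," "always," "tends to"); the cited system uses Boolean multivalued logic rather than probabilities, so this is only an illustrative simplification.

```python
from itertools import product

# Hypothetical degrees of affirmation that each patient survives,
# chosen only to mirror the qualitative statements in the text.
SURVIVAL = {
    ("severe", True):  0.2,  # rarely survives even with treatment
    ("severe", False): 0.0,  # never survives without treatment
    ("mild1",  True):  1.0,  # always survives with treatment
    ("mild1",  False): 0.7,  # tends to survive even without treatment
    ("mild2",  True):  1.0,
    ("mild2",  False): 0.7,
}
PATIENTS = ["severe", "mild1", "mild2"]

def feasible(treat):
    """Resource rule: treating the severely injured person excludes
    treating either mildly injured person, and vice versa."""
    return not (treat["severe"] and (treat["mild1"] or treat["mild2"]))

def expected_survivors(treat):
    """Sum of survival degrees under a given treatment plan."""
    return sum(SURVIVAL[(p, treat[p])] for p in PATIENTS)

# Enumerate all feasible treatment plans and score each one.
plans = [dict(zip(PATIENTS, bits)) for bits in product([True, False], repeat=3)]
for treat in filter(feasible, plans):
    print(treat, round(expected_survivors(treat), 2))
```

Under these illustrative numbers, treating the severely injured person yields 0.2 + 0.7 + 0.7 = 1.6 expected survivors, while treating both mildly injured persons yields 2.0; which plan the AI should choose depends on the norm imposed on it, which is exactly the question the text turns to next.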
A certain AI[2], [3] can be controlled by a simple norm similar to the Three Laws of Robotics. Specifically, suppose that we add a rule of egalitarianism--for example, "injured persons who will not survive without treatment must be treated"--applied equally without regard to age or other factors, and that the degrees of affirmation of the neurons are quasi-optimized to satisfy the resulting constraints. Under such conditions, the AI tends to make decisions that prioritize treating the severely injured person. Conversely, if we add a rule of utilitarianism--for example, "all injured persons must survive"--with the aim of maximizing the number of survivors, the AI's decisions tend to prioritize treating the two mildly injured persons, but the motivation to provide life-saving care decreases slightly. If we add a triage rule that combines both principles--for example, "injured persons who will not survive without treatment must be treated; at the same time, all injured persons must survive"--decision-making becomes a utilitarian approach leaning toward egalitarianism, with no significant decrease in motivation to provide life-saving care.
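The effect of the two norms can be illustrated with a small weighted soft-constraint model. All survival degrees and weights here are illustrative assumptions of mine; the cited papers quasi-optimize degrees of affirmation in a Boolean multivalued logic, not a weighted score, so this sketch shows only the qualitative behavior.

```python
from itertools import product

# Hypothetical survival degrees mirroring the text's qualitative statements.
SURVIVAL = {
    ("severe", True):  0.2, ("severe", False): 0.0,
    ("mild1",  True):  1.0, ("mild1",  False): 0.7,
    ("mild2",  True):  1.0, ("mild2",  False): 0.7,
}
PATIENTS = ["severe", "mild1", "mild2"]

def feasible(treat):
    # Resource rule: the severely injured person and the mildly injured
    # persons cannot be treated at the same time.
    return not (treat["severe"] and (treat["mild1"] or treat["mild2"]))

def egalitarian_violations(treat):
    # "Injured persons who will not survive without treatment must be treated."
    return sum(1 for p in PATIENTS
               if SURVIVAL[(p, False)] == 0.0 and not treat[p])

def utilitarian_score(treat):
    # "All injured persons must survive" -> expected number of survivors.
    return sum(SURVIVAL[(p, treat[p])] for p in PATIENTS)

def best_plan(w_egal, w_util):
    """Pick the feasible plan maximizing a weighted soft-constraint score."""
    plans = [dict(zip(PATIENTS, bits)) for bits in product([True, False], repeat=3)]
    return max((t for t in plans if feasible(t)),
               key=lambda t: w_util * utilitarian_score(t)
                             - w_egal * egalitarian_violations(t))

print(best_plan(w_egal=1.0, w_util=0.1))  # egalitarian emphasis
print(best_plan(w_egal=0.1, w_util=1.0))  # utilitarian emphasis
print(best_plan(w_egal=0.3, w_util=1.0))  # combined norm
```

With the egalitarian weight dominant the chosen plan treats the severely injured person; with the utilitarian weight dominant it treats both mildly injured persons; an intermediate weighting still treats the mildly injured persons while the egalitarian penalty remains active, a rough analogue of a utilitarian approach leaning toward egalitarianism.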
4. Chu-Katsu Bot--an AI system for practicing job interviews
By designing and experimenting with AI in this way, it becomes clear that in extreme situations--that is, situations involving the paradox that treating the severely injured first does not always maximize the number of survivors, while treating the mildly injured first can sometimes reduce the motivation to provide life-saving care--triage serves as a practical methodology of ethical norms that keeps the decline in that motivation in check. Ultimately, triage can be operated even without AI, but if doing so imposes a mental burden on life-saving staff, it may be acceptable to delegate decision-making to AI intentionally. In the example above, I used an extreme situation to make the essence of the problem easier to understand. Even in everyday circumstances, however, individuals seem to face a choice about the use of AI: either uniformly reject black-box AI as untrustworthy, or cultivate the emotional resilience to use AI as a tool that allows a measure of human judgment and diversity, thereby maintaining economic activity while striving to lead a mentally fulfilling life.
The Chuo University Career Center has introduced the web-based AI job interview practice system "Chu-Katsu Bot" (version 2.0 at the time of this article's publication).[4], [5] Chu-Katsu Bot evaluates the level of each competency (here, "competency" refers to acquired thinking and behavioral traits)[6] that appears in the episode described by the user and then provides guidance on how to improve those competencies. Since Chu-Katsu Bot incorporates counseling techniques such as the Barnum effect and mirroring, it serves as a mirror of the user's patterns of thinking. A user from the generation of students familiar with AI commented that conversing with the Chu-Katsu Bot avatar provides new personal insights and makes the job-hunting process enjoyable. Perhaps such feelings are merely a temporary response. Even so, users may feel motivated to do their best after receiving insightful comments from Chu-Oji, the Chuo University mascot who serves as the interview avatar. After all, the AI Chu-Oji is a well-intentioned friend, not a sinister agent plotting to disrupt humanity with fake information or bias.
[1] The Japanese Society for Artificial Intelligence, Artificial Intelligence Research. https://www.ai-gakkai.or.jp/whatsai/AIresearch.html
[2] Suzuki, H. and Nishikawa, A. "Algebraic Reproduction of Triage Decision Making Based on Heuristics of Boolean Multivalued Logic," IEICE Technical Report, Vol. 124, No. 426, NLC2024-30, pp. 19-24, March 2025. https://ken.ieice.org/ken/paper/20250309uc90/
[3] Watanabe, T., Nishizaka, T., Suzuki, K., Nishikawa, A., Katai, H. and Suzuki, H. "Algebraic Reproduction of Triage Decision Making on Simple Heuristics of Boolean Multivalued Logic," 2025 IEEE Symposium on Computational Intelligence in Health and Medicine (CIHM), Trondheim, Norway, pp. 1-7, March 2025. https://doi.org/10.1109/CIHM64979.2025.10969480
[4] Chuo University, Announcement of the Release of Chu-Katsu Bot 2.0--An AI System for Practicing Job Interviews, March 26, 2025. https://www.chuo-u.ac.jp/career/center/science/news/2025/03/79437/
[5] Chuo University, The First for Universities! The Career Center Introduces Chu-Katsu Bot, an AI System for Practicing Job Interviews, April 15, 2024. https://www.chuo-u.ac.jp/career/center/science/news/2024/04/70786/
[6] Chuo University, List of Competency Definitions. https://www.chuo-u.ac.jp/gp/competency_pro/competency/definition/
Hisashi Suzuki was born in Miyagi Prefecture. He completed the Master’s Program in the Department of Biophysical Engineering of the Physical Science Course, Graduate School of Engineering Science, Osaka University in 1985. He completed the Doctoral Program in the Department of Mechanical Engineering of the Physical Science Course, Graduate School of Engineering Science, Osaka University in 1988. He holds a Ph.D. in engineering. After serving as Research Assistant in the Department of Mechanical Engineering, Faculty of Engineering Science, Osaka University, Research Assistant and then Lecturer in the Department of Mathematical Engineering and Information Physics, Faculty of Engineering, the University of Tokyo, Associate Professor in the Faculty of Computer Science and Systems Engineering, Kyushu Institute of Technology, and Associate Professor in the Department of Information and System Engineering, Faculty of Science and Engineering, Chuo University, he assumed his current position in 1999.
His current research themes include 3D reconstruction using stereoscopic endoscopes, a machine-psychological approach to the trolley problem, and AI support for pathological diagnosis.
His major publications include Foundations of Knowledge Information Processing--Multivalued Logic Processing with C (Baifukan, 1999).