Theses
Application
Applications for a thesis at the Professorship of Digital Marketing can be submitted at any time. However, acceptances and rejections are issued only once per quarter.
At the end of each quarter, we decide which applications can be accepted for the following quarter and which research associate will supervise you. Please note that, due to the very high number of applications, we can only accept and supervise a limited number of students. You will be notified by email in the calendar week (CW) indicated below whether your application has been accepted or rejected.
Please refrain from preliminary inquiries and submit your complete application documents right away. We can only decide on the supervision of a thesis on the basis of complete documents.
Application deadlines:
| Desired start of the thesis | Application deadline | Notification of acceptance or rejection |
|---|---|---|
| January, February, March | End of CW 49 | End of CW 50 |
| April, May, June | End of CW 10 | End of CW 11 |
| July, August, September | End of CW 23 | End of CW 24 |
| October, November, December | End of CW 36 | End of CW 37 |
To apply, please complete this online form.
Topics
The topic is chosen in close coordination with the professorship. The topics listed below can in principle be used for both Bachelor's and Master's theses (incl. MBA and EMBA); the scope and level of detail of the thesis differ accordingly.
Theses with industry partners are possible, and you may propose a partner yourself. Such a thesis should address a concrete problem from practice without neglecting the essential scientific standards of the work.
We currently offer two types of topics:
General research topics: These serve as inspiration. You define the concrete research focus and method yourself in your exposé.
Specific research topics: These topics already have a predefined framework that you can refer to directly in your exposé.
In addition, you are welcome to propose your own topic in any area related to digital marketing.
For general and self-proposed topics, an exposé (1–3 pages) is required. An example guide by the Chair of Controlling, showing how the exposé should be structured, is available here. The exposé forms the basis for finding and coordinating the topic between you, the professorship, and, where applicable, industry partners.
General Research Topics
These topics serve as inspiration for your exposé. You should develop a concrete topic, including its theoretical relevance and method. We particularly welcome quantitative and data-intensive research.
Supervisor: Leonard Kinzinger
Details & Focus: Digital Twins represent lifelike simulations of real consumers, built by conditioning large language models on granular socio-economic data. They promise to enable marketers and researchers to forecast reactions, compare strategies, and conduct experiments that would be costly or impossible with traditional survey methods.
Current Research Focus:
- Understanding, detecting, and mitigating biases in digital twins
- Developing a foundation model specifically optimized for digital twins (see Topic “Developing a Foundation Model Optimized for Digital Twins”)
- Extending digital twins to multimodal advertising content (text, images, audio, video)
- Evaluating the accuracy and fidelity of digital twin responses across diverse marketing tasks
- Designing robust validation frameworks comparing digital twins to real consumer behavior
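The conditioning step described above can be illustrated with a short script. This is a minimal sketch, assuming an OpenAI-compatible chat API as one possible backend; the persona fields, model name, and survey question are illustrative assumptions, not part of the topic description.

```python
# Minimal sketch of a digital-twin conditioning step: a persona built from
# socio-economic attributes is injected as a system prompt so the model
# answers survey questions "as" that consumer. Persona and question are
# hypothetical; any chat model could be substituted.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

persona = {"age": 34, "income": "45k-60k EUR", "household": "2 adults, 1 child",
           "city_size": "mid-sized city", "media_use": "mostly mobile, short video"}

system_prompt = (
    "You are simulating a single consumer. Answer every question strictly from "
    f"this person's perspective: {persona}. "
    "Give a rating from 1 (not at all) to 7 (very much) with a one-sentence reason."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "How likely are you to try a new plant-based snack brand?"},
    ],
)
print(response.choices[0].message.content)
```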
Sources:
- Goli, A., & Singh, A. (2024). Frontiers: Can large language models capture human preferences?. Marketing Science, 43(4), 709-722. Link
- Li, P., Castelo, N., Katona, Z., & Sarvary, M. (2024). Frontiers: Determining the validity of large language models for automated perceptual analysis. Marketing Science, 43(2), 254-266. Link
- Toubia, O., Gui, G. Z., Peng, T., Merlau, D. J., Li, A., & Chen, H. (2025). Database report: Twin-2k-500: A data set for building digital twins of over 2,000 people based on their answers to over 500 questions. Marketing Science, 44(6), 1446-1455. Link
- Peng, T., Gui, G., Merlau, D. J., Fan, G. J., Sliman, M. B., Brucks, M., ... & Toubia, O. (2025). A mega-study of digital twins reveals strengths, weaknesses and opportunities for further improvement. arXiv preprint arXiv:2509.19088. Link
- Park, J. S., Zou, C. Q., Shaw, A., Hill, B. M., Cai, C., Morris, M. R., ... & Bernstein, M. S. (2024). Generative agent simulations of 1,000 people. arXiv preprint arXiv:2411.10109. Link
- Binz, M., Akata, E., Bethge, M., Brändle, F., Callaway, F., Coda-Forno, J., ... & Schulz, E. (2025). A foundation model to predict and capture human cognition. Nature, 1-8. Link
- Argyle, L. P., Busby, E. C., Fulda, N., Gubler, J. R., Rytting, C., & Wingate, D. (2023). Out of one, many: Using language models to simulate human samples. Political Analysis, 31(3), 337-351. Link
Supervisors: Benedikt Roder, Shihong Zhang, Leonard Kinzinger
Details & Focus: Generative AI has rapidly expanded beyond text and images, with today’s models producing high-quality music, audio, and full songs at scale. Suno alone generates around seven million tracks per day, many of which are uploaded to digital streaming providers (DSPs) such as Spotify, Apple Music, and Deezer. This topic examines how GenAI reshapes music creation, distribution, and listener perceptions in an era where AI-generated audio is becoming mainstream.
Current Research Focus:
- Benchmarking automatic music description methods for music generation systems (see Topic “Benchmarking Automatic Music Description Methods for Music Generation Systems”)
- How AI-generated music is perceived by listeners
- How disclosure affects marketing effectiveness
- How advances in model architectures and training methods shape musical quality and user acceptance
- Whether AI-generated voice-overs trigger an “uncanny valley” effect similar to virtual influencers, where near-human realism can become subtly unsettling (Topic “The Uncanny Valley of AI-Generated Voice-Overs”)
Sources:
- Evans, Z., Carr, C. J., Taylor, J., Hawley, S. H., & Pons, J. (2024, February). Fast timing-conditioned latent audio diffusion. In Forty-first International Conference on Machine Learning. Link
- Efthymiou, F., Hildebrand, C., de Bellis, E., & Hampton, W. H. (2024). The power of AI-generated voices: How digital vocal tract length shapes product congruency and ad performance. Journal of Interactive Marketing, 59(2), 117-134. Link
- Datta, H., Knox, G., & Bronnenberg, B. J. (2018). Changing their tune: How consumers’ adoption of online streaming affects music consumption and discovery. Marketing Science, 37(1), 5-21. Link
- Choi, Y., Moon, J., Yoo, J., & Hong, J. H. (2025, April). Understanding the Potentials and Limitations of Prompt-based Music Generative AI. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems (pp. 1-15). Link
- Chu, H., Kim, J., Kim, S., Lim, H., Lee, H., Jin, S., ... & Ko, S. (2022, October). An empirical study on how people perceive AI-generated music. In Proceedings of the 31st ACM International Conference on Information & Knowledge Management (pp. 304-314). Link
- Shank, D. B., Stefanik, C., Stuhlsatz, C., Kacirek, K., & Belfi, A. M. (2023). AI composer bias: Listeners like music less when they think it was composed by an AI. Journal of Experimental Psychology: Applied, 29(3), 676. Link
- Hong, J. W., Fischer, K., Ha, Y., & Zeng, Y. (2022). Human, I wrote a song for you: An experiment testing the influence of machines’ attributes on the AI-composed music evaluation. Computers in Human Behavior, 131, 107239. Link
Supervisor: Xiongkai Tan
Details & Focus: This topic focuses on extracting structured information from unstructured data (e.g., text and images such as social media posts, consumer reviews, and advertisements) to inform business decisions. It introduces state-of-the-art techniques, including traditional machine learning and multimodal large language models, for automated feature extraction, content classification, and the construction of behavioral or perceptual measures.
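As one illustration of such a pipeline, the following minimal sketch labels an advertising image with an open-vocabulary vision-language model via the Hugging Face transformers pipeline; the label set and file name are assumptions for demonstration, not a prescribed setup.

```python
# Minimal sketch: zero-shot labeling of ad or social-media images with CLIP,
# turning unstructured visuals into a structured column for later analysis.
from transformers import pipeline

classifier = pipeline("zero-shot-image-classification",
                      model="openai/clip-vit-base-patch32")

# Hypothetical coding scheme for ad imagery
candidate_labels = ["product close-up", "person using the product",
                    "lifestyle scene", "text-heavy promotion"]

scores = classifier("ad_image.jpg", candidate_labels=candidate_labels)
best = max(scores, key=lambda s: s["score"])
print(best["label"], round(best["score"], 3))
```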
Sources:
- Automated Image Analysis (AIA). Link
- Using natural language processing to analyse text data in behavioural science. Link
- Scaling Open-Vocabulary Object Detection. Link
- Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection. Link
- Grounded Language-Image Pre-training. Link
Specific Research Topics
You may choose one or more of these specific topics. Please explain in your exposé how you would design the research and the data collection.
Supervisors: Leonard Kinzinger, Shihong Zhang
Details & Focus: Large Language Models (LLMs) are increasingly capable of handling modalities beyond text, including music. Accurately describing what a piece of music sounds like remains a challenge, especially for untrained listeners. Yet, such descriptions are central to how humans interact with music generation systems like Suno, ElevenLabs, or MusicGen. They also play a key role in training these systems to produce outputs coherent with user inputs. This thesis will benchmark different approaches for generating music descriptions, including pipelines based on open-source feature extraction models and alternative methods. The evaluation will compare approaches in terms of prompt coherence for skilled and unskilled listeners, as well as user satisfaction, to identify best practices for effective and user-friendly music description.
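One possible baseline in such a benchmark is a simple pipeline that turns open-source feature extraction (here librosa) into a rough textual description, which can then be compared against model-generated or human descriptions. The thresholds and wording below are illustrative placeholders, not validated mappings.

```python
# Minimal sketch of a rule-based music description baseline built on librosa.
import librosa

y, sr = librosa.load("song.wav")
tempo, _ = librosa.beat.beat_track(y=y, sr=sr)
tempo = float(tempo)
brightness = librosa.feature.spectral_centroid(y=y, sr=sr).mean()
energy = librosa.feature.rms(y=y).mean()

parts = []
parts.append("a fast-paced" if tempo > 120 else "a slow to mid-tempo")
parts.append("bright, treble-heavy" if brightness > 3000 else "warm, mellow")
parts.append("high-energy" if energy > 0.1 else "quiet, subdued")

description = ", ".join(parts) + " track"
print(description)  # e.g. "a fast-paced, warm, mellow, high-energy track"
```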
Sources:
- Evans, Z., Carr, C. J., Taylor, J., Hawley, S. H., & Pons, J. (2024, February). Fast timing-conditioned latent audio diffusion. In Forty-first International Conference on Machine Learning. Link
- Choi, Y., Moon, J., Yoo, J., & Hong, J. H. (2025, April). Understanding the Potentials and Limitations of Prompt-based Music Generative AI. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems (pp. 1-15). Link
- Chu, H., Kim, J., Kim, S., Lim, H., Lee, H., Jin, S., ... & Ko, S. (2022, October). An empirical study on how people perceive AI-generated music. In Proceedings of the 31st ACM International Conference on Information & Knowledge Management (pp. 304-314). Link
- Hong, J. W., Fischer, K., Ha, Y., & Zeng, Y. (2022). Human, I wrote a song for you: An experiment testing the influence of machines’ attributes on the AI-composed music evaluation. Computers in Human Behavior, 131, 107239. Link
Supervisor: Sara Caprioli
Details & Focus: Thanks to advances in generative AI algorithms, AI companions are now commercially available: applications that use artificial intelligence to offer consumers emotional interactions such as friendship and romance. Although these systems are incapable of feeling real emotions, concern, or care, they can generate language that creates the perception of empathy. Still, much remains to be explored about how AI companions affect our well-being. Initial studies show, for example, that interacting with AI companions can reduce loneliness. In your thesis, you can explore further consequences for individuals' well-being after they interact with an AI companion (e.g., their social motivation or their self-esteem/self-acceptance). Suggested method: quantitative secondary data analysis or experiment.
Sources:
Supervisor: Sara Caprioli
Details & Focus: Generative AI is becoming an increasingly common part of daily life: from chatbots that draft emails and summarize documents to image generators that produce artwork or product designs, GenAI is changing how people approach creative tasks. While AI capabilities are advancing rapidly, much remains to be learned about how these tools influence consumer perceptions, behaviors, and self-concept. Your thesis could investigate how consumers respond to AI in a specific context of your choice. You might examine how using AI tools shapes users' self-perceptions, behaviors, and/or performance over time (e.g., users' feelings of competence and creativity, as well as their performance before, during, and after using AI; or a task's expected versus actual creativity, quality, and usefulness). Suggested method: experiment.
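For the before/during/after design mentioned above, a within-subject analysis could look like the following minimal sketch; the data file and column names (participant, phase, competence) are assumptions about how the collected ratings might be stored.

```python
# Minimal sketch: repeated-measures ANOVA testing whether felt competence
# differs across the phases of AI use (before, during, after).
import pandas as pd
from statsmodels.stats.anova import AnovaRM

data = pd.read_csv("ai_use_study.csv")  # long format: one row per participant x phase

result = AnovaRM(data=data, depvar="competence",
                 subject="participant", within=["phase"]).fit()
print(result)
```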
Sources:
- Human confidence in artificial intelligence and in themselves: The evolution and impact of confidence on adoption of AI advice. Link
- Human creativity in the age of llms: Randomized experiments on divergent and convergent thinking. Link
- Lower artificial intelligence literacy predicts greater AI receptivity. Link
- Not all AI is created equal: A meta-analysis revealing drivers of AI resistance across markets, methods, and time. Link
Supervisor: Leonard Kinzinger
Details & Focus: Recent advances in generative AI, particularly video generation models such as Google Veo 3, OpenAI Sora, and Bytedance Seeddance 1.0, allow users to transform static images into short, immersive animations in under a minute and at very low cost. When conditioned correctly, these models can subtly animate secondary elements of an image while keeping central elements static. This creates engaging motion-enhanced assets that promise to better engage users, while respecting the original artworks. This thesis will explore how consumers react to these immersive animations compared to their static counterparts. You may investigate how such motion-augmented assets influence engagement, click behavior, brand recall, and perceived creativity or quality. Suggested method: online experiment or A/B testing.
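For the suggested A/B test, the core analysis could be as simple as a two-proportion z-test on click-through rates, as in this minimal sketch; the counts are placeholders, not real data.

```python
# Minimal sketch: comparing click-through for the animated asset vs. its
# static counterpart with a two-proportion z-test.
from statsmodels.stats.proportion import proportions_ztest

clicks = [418, 352]          # animated, static (placeholder counts)
impressions = [10000, 10000]

z_stat, p_value = proportions_ztest(count=clicks, nobs=impressions)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
```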
Sources:
- Cian, L., Krishna, A., & Elder, R. S. (2014). This logo moves me: Dynamic imagery from static images. Journal of marketing research, 51(2), 184-197. Link
- Jia, H., Kim, B. K., & Ge, L. (2020). Speed up, size down: How animated movement speed in product videos influences size assessment and product evaluation. Journal of Marketing, 84(5), 100-116. Link
- Bashirzadeh, Y., Mai, R., & Faure, C. (2022). How rich is too rich? Visual design elements in digital marketing communications. International journal of research in marketing, 39(1), 58-76. Link
- Stuppy, A., Landwehr, J. R., & McGraw, A. P. (2024). The art of slowness: Slow motion enhances consumer evaluations by increasing processing fluency. Journal of Marketing Research, 61(2), 185-203. Link
Supervisor: Leonard Kinzinger
Details & Focus: Digital Twins are lifelike simulations of real consumers created by conditioning large language models on granular socio-economic data. They promise to enable marketers and researchers to forecast reactions, compare strategies, and conduct experiments that are difficult or costly to run with real respondents. However, current Digital Twins often behave noticeably differently from their human counterparts. Humans themselves exhibit well-documented cognitive biases and behavioral patterns that deviate from rational decision-making, as shown extensively in behavioral economics. In contrast, today’s Digital Twins introduce an additional layer of model-driven biases stemming from their training and post-processing: for example, progressive bias, over-positivity and agreeableness, innovation friendliness, perfect knowledge, and overly reasonable responses.
One of our hypotheses is that some of these biases stem from the fact that most Digital Twins are built on top of instruction-fine-tuned models (for example, OpenAI's GPT-5 or Google's Gemini 2.5), which are intentionally optimized to be friendly, harmless, and agreeable. This thesis will explore whether Digital Twins can be better aligned with human behavior by fine-tuning a base model (a pre-trained model without post-training) on market research data and comparing its performance to that of instruction-fine-tuned models. Suggested method: model fine-tuning, behavioral evaluation, and quantitative comparison against real human data.
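As a rough sketch of the fine-tuning step, the following code adapts a small base causal language model to survey-style records with the Hugging Face Trainer; the model choice (gpt2 as a stand-in), file name, and prompt format are assumptions for illustration, not a prescribed setup.

```python
# Minimal sketch: fine-tuning a base (non-instruction-tuned) causal LM on
# survey-style records so its completions mimic recorded respondent answers.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

MODEL = "gpt2"  # stand-in for any pre-trained base model without post-training

tokenizer = AutoTokenizer.from_pretrained(MODEL)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(MODEL)

# Hypothetical JSONL with fields "profile" (socio-economic description)
# and "answer" (the respondent's recorded survey answer).
raw = load_dataset("json", data_files="market_research.jsonl", split="train")

def to_text(example):
    return {"text": f"Respondent profile: {example['profile']}\nAnswer: {example['answer']}"}

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

dataset = raw.map(to_text).map(tokenize, batched=True,
                               remove_columns=raw.column_names + ["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="twin-base", num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # the tuned model is then benchmarked against instruction-tuned baselines
```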
Sources:
- Binz, M., Akata, E., Bethge, M., Brändle, F., Callaway, F., Coda-Forno, J., ... & Schulz, E. (2025). A foundation model to predict and capture human cognition. Nature, 1-8. Link
- Park, J. S., Zou, C. Q., Shaw, A., Hill, B. M., Cai, C., Morris, M. R., ... & Bernstein, M. S. (2024). Generative agent simulations of 1,000 people. arXiv preprint arXiv:2411.10109. Link
- Li, B., Wei, Q. O., & Wang, X. S. (2025). Predicting behaviors with Large Language Model (LLM)-powered digital twins of customers (May 15, 2025). Link
Supervisor: Leonard Kinzinger
Details & Focus: AI-generated voice-overs are becoming increasingly realistic, approaching human-like tone, pacing, and emotional expression. Yet, similar to virtual influencers, these near-human outputs may evoke an “uncanny valley” effect, where voices that are almost but not fully human sound subtly unsettling to listeners. This thesis will investigate how audiences perceive AI-generated voices across different levels of realism, whether certain imperfections increase or reduce discomfort, and how this influences engagement, trust, and marketing effectiveness. Suggested methods: experiment or perceptual rating study.
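A perceptual rating study of this kind could test for the characteristic non-monotonic pattern with a quadratic regression, as in this minimal sketch; the file and column names are assumptions about the collected data.

```python
# Minimal sketch: testing for a non-monotonic ("uncanny valley") relationship
# between perceived voice realism and listener discomfort.
import pandas as pd
import statsmodels.formula.api as smf

ratings = pd.read_csv("voice_ratings.csv")  # assumed columns: realism (1-7), discomfort (1-7)

# A significant quadratic term would indicate a dip or peak rather than a monotonic trend.
model = smf.ols("discomfort ~ realism + I(realism ** 2)", data=ratings).fit()
print(model.summary())
```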
Sources:
- Blut, M., Wang, C., Wünderlich, N. V., & Brock, C. (2021). Understanding anthropomorphism in service provision: a meta-analysis of physical robots, chatbots, and other AI. Journal of the academy of marketing science, 49(4), 632-658. Link
- Xu, Z., Liu, S., Zhang, S., & Yang, Y. (2026). Decoding consumer responses to anthropomorphic products using electroencephalography, skin conductance, and eye-tracking. Journal of Retailing and Consumer Services, 89, 104618. Link
- Wang, X., Zhang, Z., & Jiang, Q. (2024). The effectiveness of human vs. AI voice-over in short video advertisements: A cognitive load theory perspective. Journal of Retailing and Consumer Services, 81, 104005. Link
- Hu, P., Gong, Y., Lu, Y., & Ding, A. W. (2023). Speaking vs. listening? Balance conversation attributes of voice assistants for better voice marketing. International Journal of Research in Marketing, 40(1), 109-127. Link
Supervisor: Shihong Zhang
Details & Focus: This topic explores how diverse music and audio features, such as rhythm, timbre, melodic structure, emotional tone, and lyrical content, can be systematically categorized and analyzed to support marketing-related tasks. The emphasis is on constructing a taxonomy of meaningful audio attributes that are behaviorally or semantically relevant to consumer perception, brand fit, and emotional resonance, and on comparatively evaluating existing feature extraction methods across modalities, including audio signals, lyrics, and symbolic music, under unified benchmarks. The goal is to bridge music information retrieval with marketing analytics by providing interpretable, reusable, and task-oriented audio representations.
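To make the idea of task-oriented, multimodal audio representations concrete, the following minimal sketch combines a few librosa descriptors with a generic lyric sentiment score into one feature row; the file names, chosen descriptors, and their taxonomy labels are illustrative assumptions.

```python
# Minimal sketch: one row of a multimodal feature table for a single track,
# combining signal-level descriptors (rhythm, timbre) with a lyric-level
# emotional-tone estimate from a generic sentiment pipeline.
import librosa
import pandas as pd
from transformers import pipeline

y, sr = librosa.load("track.mp3")
tempo, _ = librosa.beat.beat_track(y=y, sr=sr)
centroid = librosa.feature.spectral_centroid(y=y, sr=sr).mean()

sentiment = pipeline("sentiment-analysis")
lyric_tone = sentiment(open("track_lyrics.txt").read()[:512])[0]

features = pd.DataFrame([{
    "tempo_bpm": float(tempo),             # rhythm
    "spectral_centroid": float(centroid),  # timbre / brightness
    "lyric_label": lyric_tone["label"],    # emotional tone of lyrics
    "lyric_score": lyric_tone["score"],
}])
print(features)
```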
Sources:
Supervisor: Shihong Zhang
Details & Focus: This topic addresses the structural fragmentation of current audio processing toolchains through systematic taxonomy building and multi-task integration, grounded in a market-driven analysis. The project begins by developing a systematic taxonomy of foundational audio processing tasks, informed by quantitative market share and demand analysis. The resulting commercial requirements then guide the main methodological contribution: the design and implementation of a unified multi-task inference framework. The research concludes by formalizing the framework's design principles and empirically evaluating its workflow utility, content scalability, and market viability against disaggregated, task-specific AI solutions, providing insight into AI adoption barriers in creative content production.
Sources:
Duration
According to the examination regulations for the management and economics programs, the working period is:
- Master's thesis (TUM-BWL/TUM-WIN/TUM-NAWI/TUM-WITEC/MBA/EMBA): 6 months
- Bachelor's thesis: 3 months
Length
- Master's thesis: 45 pages +/- 10% (incl. list of references)
- Bachelor's thesis: 30 pages +/- 10% (incl. list of references)
Submission
The thesis is submitted by email to the Grade Management (email), not to the professorship. After review and approval, the Grade Management forwards the thesis to the supervisor.
To be submitted:
- Thesis with a signed declaration of authorship (ehrenwörtliche Erklärung; a digital signature is sufficient)
- Einsichtnahmeerklärung/Permission to view… (as a separate PDF): https://www.wi.tum.de/downloads/
Defense
For MBA Master's theses, an oral defense takes place after submission of the written thesis. Please arrange an appointment in good time (at least four weeks in advance). For all other theses, a presentation is not mandatory but may be requested by your supervisor at the professorship.
JUMS Publication
Junior Management Science is an academic journal that publishes outstanding theses in business and management. For more details, please see the JUMS website.
Buddy Program
On request, all students writing their thesis at the Professorship of Digital Marketing can take part in our buddy program. We connect you with other students working on a similar topic area in their thesis with us, so that you can exchange experiences and useful tips.