Exploring the Future of Research: The Agentic Workflow in Humanities and Social Sciences
Generative AI is becoming a focal point of knowledge work, with applications emerging across a wide range of fields. Much of the existing research, however, emphasizes software engineering and the natural sciences, leaving the humanities and social sciences comparatively underexplored. A recent study, referenced as arXiv:2602.17221v1, addresses this gap by introducing a methodological framework tailored to these areas, termed the Agentic Workflow.
Unpacking the Agentic Workflow
At the heart of the study is a proposal for an AI agent-based collaborative research workflow, a structured approach to integrating artificial intelligence into research projects. The authors frame the methodology as a "methodological experiment" aimed at creating a replicable framework for humanities and social science scholars. The framework unfolds across seven stages and rests on three core principles: task modularization, human-AI division of labor, and verifiability.
Task Modularization
Task modularization breaks the research process into manageable components. Each stage of the workflow assigns specific roles to human researchers and AI agents. This clear division helps manage complex research tasks and lets each participant play to their strengths: human researchers make the critical research judgments, while AI agents handle information retrieval and text generation.
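To make the idea concrete, here is a minimal sketch, in Python, of how a modular task breakdown with explicit role assignments might be represented. The stage names and role assignments below are illustrative placeholders, not the seven stages defined in the paper.

```python
from dataclasses import dataclass
from enum import Enum


class Role(Enum):
    HUMAN = "human researcher"
    AI_AGENT = "AI agent"


@dataclass
class Stage:
    name: str
    lead: Role        # who makes the final call at this stage
    support: Role     # who assists
    deliverable: str  # artifact produced, so progress can be checked


# Illustrative stages only -- the paper defines its own seven-stage sequence.
workflow = [
    Stage("Formulate research question", Role.HUMAN, Role.AI_AGENT, "question statement"),
    Stage("Retrieve and summarize literature", Role.AI_AGENT, Role.HUMAN, "annotated bibliography"),
    Stage("Draft analysis", Role.AI_AGENT, Role.HUMAN, "draft text"),
    Stage("Interpret findings and review ethics", Role.HUMAN, Role.AI_AGENT, "final interpretation"),
]

for stage in workflow:
    print(f"{stage.name}: led by {stage.lead.value}, produces {stage.deliverable}")
```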
Human-AI Division of Labor
The human-AI division of labor is critical to maximizing productivity while maintaining output quality. AI excels at rapidly processing large volumes of information and generating text, but human judgment remains irreplaceable: researchers formulate the research questions, interpret theoretical frameworks, and provide ethical reflection throughout the study. This division lets each participant contribute effectively to the overall research goals.
Verifiability
Lastly, verifiability is an essential component of the Agentic Workflow. Ensuring that each stage of the research process can be tracked and evaluated not only enhances research credibility but also supports iterative improvement of the method itself, driving greater rigor and transparency in the results.
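One way such traceability could be operationalized, offered purely as an illustration since the paper does not prescribe any particular tooling, is to log every AI-assisted step with its prompt, output, and a content hash so the record can be audited later:

```python
import hashlib
import json
from datetime import datetime, timezone


def log_step(log_path: str, stage: str, prompt: str, output: str) -> dict:
    """Append an auditable record of one AI-assisted step to a JSON Lines log."""
    record = {
        "stage": stage,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,
        "output": output,
        # The hash lets a reviewer verify the logged output was not altered afterwards.
        "output_sha256": hashlib.sha256(output.encode("utf-8")).hexdigest(),
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
    return record


# Example: record a literature-summary step so it can be re-checked during review.
log_step(
    "audit_log.jsonl",
    stage="literature review",
    prompt="Summarize recent work on AI use in social science research.",
    output="(model-generated summary would go here)",
)
```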
Empirical Analysis with Taiwan’s Claude.ai Data
To validate the proposed methodology, the study analyzes Claude.ai usage data from Taiwan: 7,729 conversations from the Anthropic Economic Index (AEI) collected in November 2025. This empirical application demonstrates the Agentic Workflow in practice and shows its viability for secondary-data research.
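As a rough sketch of what secondary analysis of such usage data might look like, the snippet below tabulates conversations by task category. The file name and column names are assumptions made for illustration, not the actual AEI schema.

```python
import pandas as pd

# Hypothetical file and column names -- the real AEI release has its own schema.
df = pd.read_csv("aei_taiwan_2025_11.csv")

# Assumed columns: conversation_id, task_category
by_category = (
    df.groupby("task_category")["conversation_id"]
    .nunique()
    .sort_values(ascending=False)
)

print(f"Total conversations: {df['conversation_id'].nunique()}")
print(by_category.head(10))
```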
Operational Demonstration
The analysis of the AEI Taiwan data serves as the key operational demonstration of the framework. It illustrates not only the mechanics of the workflow but also the quality of the outputs produced through this collaborative method. By documenting the research process reflexively, researchers can identify opportunities for iterative refinement and continuous improvement, gaining insight into both the operational facets and the output quality across research contexts.
Modes of Human-AI Collaboration
In addition to outlining the framework, the study identifies three operational modes of human-AI collaboration: direct execution, iterative refinement, and human-led approaches. This taxonomy deepens the understanding of how researchers can interact with AI throughout their projects.
Direct Execution
In a direct execution mode, human researchers might rely on AI agents to complete specific tasks with minimal oversight, allowing for rapid data processing and analysis. This approach is particularly beneficial for tasks that are repetitive or require high volume, such as gathering data from multiple sources.
Iterative Refinement
The iterative refinement mode emphasizes a collaborative approach where human feedback shapes AI outputs. Researchers can iteratively review and refine the work produced by the AI, ensuring that it aligns with their research objectives and theoretical frameworks. This mode fosters a dynamic feedback loop, enhancing the overall quality of research.
Human-Led
The human-led mode places human researchers at the forefront, using AI tools as supportive instruments rather than primary decision-makers. This approach underscores the critical role of human interpretation, especially in crafting nuanced research questions and ethical considerations.
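Taken together, the three modes can be read as points on a spectrum of human oversight. The sketch below is a simplified illustration built around a placeholder ask_model function rather than any particular API; it is not the paper's implementation.

```python
from typing import Callable, Optional


def ask_model(prompt: str) -> str:
    """Stand-in for a call to a generative AI model (a placeholder, not a real API)."""
    return f"[model output for: {prompt}]"


def direct_execution(task: str) -> str:
    # The AI completes the task with minimal oversight; suited to repetitive, high-volume work.
    return ask_model(task)


def iterative_refinement(task: str, review: Callable[[str], Optional[str]], max_rounds: int = 3) -> str:
    # Human feedback shapes successive AI drafts until the reviewer is satisfied.
    draft = ask_model(task)
    for _ in range(max_rounds):
        feedback = review(draft)  # the reviewer returns None when the draft is acceptable
        if feedback is None:
            break
        draft = ask_model(f"{task}\nRevise according to this feedback: {feedback}")
    return draft


def human_led(human_draft: str) -> str:
    # The researcher writes the substance; the AI offers only advisory suggestions.
    _suggestions = ask_model(f"Suggest minor edits for: {human_draft}")
    return human_draft  # the final text remains the human's


print(direct_execution("List the publication years mentioned in these citations: ..."))
```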
Acknowledging Limitations
Despite its promising framework, the study acknowledges several limitations. Reliance on a single-platform dataset may restrict generalizability, and the cross-sectional design limits what can be said about long-term applicability and trends. Concerns about AI reliability, including the potential for data inaccuracies, are also recognized, and researchers are urged to remain vigilant in their critical evaluations.
With generative AI continuously evolving, this study aims to pave the way for a new approach to humanities and social sciences research. By proposing a methodological framework that embraces these cutting-edge technologies, the Agentic Workflow offers scholars an innovative tool to navigate the complexities of modern research while retaining essential human oversight and ethical considerations.

