Summary of "IA na Pesquisa Científica: O Que NÃO Automatizar | Workflow com SciSpace | Prof. Ricardo Limongi"
Summary — main ideas, concepts and lessons
Central thesis
- Transformer-based AI (since 2017) is a powerful research assistant but must not replace core scientific thinking.
- Humans should retain responsibility for hypothesis generation, critical reasoning, interpretation, and final authorship.
- Use AI to automate tedious, repetitive, and technical tasks; do not outsource the intellectual and ethical parts of research (formulating questions, building arguments, evaluating contributions).
Key concepts explained
- Transformer models (2017) shifted language understanding from lexical matching to semantic/contextual representations, enabling AI to read and synthesize large volumes of scientific literature.
- Machine learning can detect repetition and patterns across vast corpora; platforms trained on millions of papers can suggest connections, summarize, and surface relevant work.
- Memory and cognitive effort matter: neuroscience suggests genuine knowledge requires active engagement and memory formation — passively accepting AI summaries undermines learning and original insight.
- Hallucination and lack of judgment: models can generate plausible but incorrect outputs and lack critical thinking; outputs must be verified by researchers.
- Ethics and transparency: declare AI use in manuscripts and follow journal/institutional guidelines; transparency about how AI was used is the researcher’s responsibility.
Practical advantages of AI in research (what to automate)
- Literature discovery at scale (semantic search across focused scientific databases).
- Initial synthesis and exploratory overviews (suggested research problems, topical clusters).
- Indexing and cataloguing articles into spreadsheets/tables (metadata, citation counts, filters).
- Retrieving PDFs by connecting institutional libraries or requesting authors.
- Generating structured summary tables (contributions, limitations, methods).
- Converting content to alternate modalities (audio/podcast) to aid comprehension.
- Reformatting and adapting manuscripts to journal styles (templates, citation formats, file conversion).
- Running repetitive workflows via agents/tasks (multi-step literature reviews, comparative studies, mapping funding opportunities).
- Creating reproducible logs of steps taken (search sources, queries, and filters applied).
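The "reproducible log" item above can be sketched as a small helper script. This is a hypothetical illustration of the practice, not a SciSpace feature; the field names are assumptions:

```python
import json
from datetime import datetime, timezone

def log_search(path, database, query, filters, n_results):
    """Append one search step to a JSON-lines log for reproducibility."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "database": database,        # e.g. "SciSpace", "Google Scholar"
        "query": query,
        "filters": filters,          # e.g. {"open_access": True, "year_from": 2017}
        "results_returned": n_results,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

# Example usage: one entry per search you run
log_search("search_log.jsonl", "SciSpace",
           "transformer models in scientific literature review",
           {"open_access": True, "year_from": 2017}, 120)
```

Appending one line per search yields a chronological record of sources, queries, and filters that can be exported alongside the review.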
What not to automate (what must remain human)
- Formulating the original research question, hypothesis, and theoretical framing.
- Deep reading, interpretation, and critical synthesis of full papers (not only abstracts or AI summaries).
- Judgment about an article’s scientific quality, novelty, and contribution.
- Ethical decisions, integrity, and authorship claims — do not misrepresent AI-generated content as your own.
- Final writing of intellectual content, argumentative structure, and scholarly claims (AI can help edit or improve expression, especially for non-native English speakers, but should not generate primary content).
- Responsible disclosure: be explicit and honest about what AI did and did not do in your research.
Detailed recommended workflow
Before starting
- Define a clear, original research question (human-driven).
- Learn the scientific and editorial process (how journals evaluate quality, what counts as a solid reference).
Using a research-focused AI platform (example: SciSpace)
- Go to the platform and choose “Literature review” or equivalent.
- Enter your research problem / keywords (start broad if you’re exploratory).
- Use semantic search features to surface top papers, clusters and subtopics.
- Filter results by availability (PDF present / open access), citation counts, dates, or other criteria.
- Save relevant items to your library/notebook on the platform (centralize PDFs in one place).
- If a PDF is missing, use the platform's "request author" feature or connect through your institution's library login.
- Generate summary tables automatically (customize columns such as methods, limitations, contributions, citation count).
- Ask the platform to explain specific passages, tables, or equations by selecting text in the PDF — use that to clarify difficult sections.
- Use agent/task features for multi-step jobs: instruct the agent to search multiple databases, compile a spreadsheet, and produce a comparative summary — the task will produce a reproducible log of steps.
- Supplement reading with platform-generated audio or podcasts if helpful, but always verify content against full-text reading.
- Export artifacts (tables, compiled PDFs, task logs) for recordkeeping and reproducibility.
- Use AI tools to format the manuscript to target journal standards (but not to generate the manuscript’s intellectual content).
- Keep track of credits/trial usage on the platform; avoid “random prompting” to conserve resources.
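The summary tables mentioned in the workflow (columns for methods, limitations, contributions, citation count) can also be maintained locally as a plain CSV. A minimal sketch, with illustrative column names and a placeholder example row:

```python
import csv

# Column names are an assumption, mirroring the columns suggested in the talk
COLUMNS = ["citation", "year", "methods", "main_findings",
           "limitations", "contribution", "citations_count"]

# Example row; the citation count is a placeholder, not a verified figure
papers = [
    {"citation": "Vaswani et al.", "year": 2017,
     "methods": "Transformer architecture",
     "main_findings": "Attention-only models outperform recurrent baselines",
     "limitations": "Evaluated on machine translation tasks",
     "contribution": "Foundation for modern language models",
     "citations_count": 0},
]

with open("summary_table.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=COLUMNS)
    writer.writeheader()
    writer.writerows(papers)
```

Keeping the table as a flat file makes it easy to version, share, and cross-check against the platform's auto-generated tables.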
Good practices and verification
- Never rely solely on abstracts or AI summaries — read the full article to form your own interpretation.
- Cross-check AI outputs, especially empirical claims, results, and comparisons; verify against original sources.
- Maintain an organized folder structure and a single consolidated reference library to avoid losing citations.
- Build a summary table that includes: citation, year, methods, main findings, limitations, contribution to your question.
- Prefer primary (seminal and recent) works for state-of-the-art mapping; sort by age and citations to identify seminal vs. contemporary articles.
- Keep reproducibility in mind: record your search queries, databases used, and filtering criteria (tasks in the platform can help).
- Declare AI use in manuscripts according to journal and institutional guidelines. Be precise about what AI did (e.g., “used for literature aggregation and formatting” vs. “used to write sections”).
- Treat AI as a brilliant but inexperienced intern: it can find patterns fast but cannot validate or take responsibility.
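The suggestion to sort by age and citations to separate seminal from contemporary work can be sketched as a simple classifier. The thresholds here (500 citations, 5 years) are arbitrary assumptions for illustration:

```python
CURRENT_YEAR = 2024  # assumption; set to the year of your review

def classify(papers, min_citations=500, recent_cutoff=5):
    """Split papers into seminal (older, highly cited), contemporary
    (recent), and other, each sorted by citation count descending."""
    seminal, contemporary, other = [], [], []
    for p in papers:
        age = CURRENT_YEAR - p["year"]
        if age > recent_cutoff and p["citations"] >= min_citations:
            seminal.append(p)
        elif age <= recent_cutoff:
            contemporary.append(p)
        else:
            other.append(p)
    for group in (seminal, contemporary, other):
        group.sort(key=lambda p: p["citations"], reverse=True)
    return seminal, contemporary, other
```

The same split can often be done with the platform's own sort and filter controls; the script is useful when working from an exported spreadsheet.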
Warnings and limitations
- AI can hallucinate or produce incorrect syntheses; models trained on generic or non-comparative data can be especially unreliable for claims requiring comparative baselines.
- Regulatory and editorial bodies (e.g., CNPq in Brazil, journals) emphasize transparency and integrity — follow their guidelines.
- Over-reliance on AI summaries can reduce cognitive engagement and impede learning and true knowledge generation.
Operational tips demonstrated in the talk
- Use the platform’s ability to create custom columns/filters and ranking (top-cited, year, open access).
- Use the select-and-explain feature to have the AI clarify tables/equations inline while reading PDFs.
- Use tasks to obtain a stepwise reproducible literature review and export tables/files at the end.
- Integrate SciSpace with ChatGPT (via apps) to leverage both systems when drafting queries — but be mindful of which system produced which output and keep reproducibility logs.
- Keep credits in mind on subscription platforms; plan searches and tasks to use credits efficiently.
Takeaway lessons
- AI is a multiplier for productivity in research but is not a substitute for the researcher’s intellectual labor and responsibility.
- The researcher must lead: pose the question, interpret results, ensure ethical conduct, and claim authorship.
- Use AI as an assistant for grunt work (searching, indexing, summarizing, formatting) and for learning aids, but verify everything and maintain rigorous habits (reading, organizing, documenting).
- Transparency and reproducibility are non-negotiable in the AI-augmented research process.
Speakers and sources referenced
- Main speaker: Prof. Ricardo Limongi
- Platforms and tools: SciSpace, ChatGPT, Google Scholar
- Technology reference: the Transformer architecture, introduced by a Google team in 2017
- Biographer: Walter Isaacson
- Industry figure: Bob Iger (on the value of asking good questions)
- Institutions/regulatory bodies: CNPq (Brazil)
- Other references: unspecified British journal about pitfalls of using generic data; mentions of academic journals, publishers and editorial processes
- Audience/participants: “Ivan” (audience question referenced)