Guidelines for Kernel Inference: Journal of Speculative Inference
Transparent. Collaborative. Boundary-Pushing.
1. Submission Workflow
File‑first submission, AI review, and public preview
Step-by-Step Process:
- Web Portal Submission
- Submit via the web portal (no Git workflow; no pull requests).
- Primary manuscript file is required (PDF or DOCX). We extract text for review/search and preserve the original file as the canonical artifact.
- Provide metadata (title, abstract, keywords/tags) and complete the required KAM attribution fields.
- Confirm your content responsibility under our Terms of Service.
- Automated Backend Processing
- The system stores your upload and derived text, creates an immutable version snapshot, and starts background jobs (review/publishing) in a durable queue.
- If AI review is enabled and an LLM is available (BYOK, or sponsored access when offered), the reviewer agents begin analysis.
- AI Peer Review
- AI provides structured feedback (timing varies with provider latency and queue load).
- Evaluates:
- Logical coherence and technical plausibility
- Clarity and communicability
- Novelty / innovation potential
- Policy checks: plagiarism/IP risk, citation veracity, and illegal/harmful content signals (which can trigger human review)
- Public Comment Period (30 Days)
- If accepted/auto‑accepted, the submission enters a 30‑day public preview period for open community feedback.
- Comments are public-by-default and pseudonymous, with email verification and moderation/flagging.
- Auto-release rule: If there are no approved comments during the preview window, the paper is published automatically.
- QA stopgap rule: If there are approved comments, the agent system may either (a) open a 14-day author revision window, or (b) publish anyway with a rationale.
- Author Revision Window (14 Days)
- If triggered, the author gets a 14‑day window to upload a revision as a new immutable version.
- Version history remains visible; public access depends on publication state.
- Publication + Living Versions
- Once published, the paper has a stable public ID (e.g., KI-…) and public URL.
- Authors can later submit revisions as new immutable versions (v2, v3, …).
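For readers who prefer pseudocode, the preview rules above reduce to a small decision procedure. The sketch below is a minimal illustration, not Kernel's actual implementation; the function and field names are assumptions, and it presumes the rules are evaluated once at the end of the 30-day preview window.

```python
from dataclasses import dataclass

@dataclass
class PreviewOutcome:
    action: str        # "publish", "open_revision_window", or "publish_with_rationale"
    rationale: str = ""

def resolve_preview(approved_comment_count: int,
                    agent_prefers_revision: bool,
                    agent_rationale: str = "") -> PreviewOutcome:
    """Illustrative reading of the auto-release and QA stopgap rules (not Kernel's code)."""
    if approved_comment_count == 0:
        # Auto-release rule: no approved comments during the 30-day preview window.
        return PreviewOutcome("publish", "No approved comments during preview window")
    if agent_prefers_revision:
        # QA stopgap (a): open a 14-day author revision window.
        return PreviewOutcome("open_revision_window",
                              agent_rationale or "Approved comments warrant revision")
    # QA stopgap (b): publish anyway, with a stated rationale.
    return PreviewOutcome("publish_with_rationale",
                          agent_rationale or "Comments reviewed; none blocking")
```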
1.1 Submission tracks (review rubrics)
Kernel supports two submission tracks. The track you select changes how reviewer agents interpret your manuscript and what counts as a “major flaw”.
A) Speculative / Research Program
- Use when: you are presenting a heuristic, paradigm, conceptual framework, or research agenda.
- Evidence: new experiments/data/simulations are not required for a strong review in this track.
- What agents prioritize: epistemic labeling (heuristic vs claim), conceptual coherence, non-circular definitions, discriminative predictions, and clear falsification criteria.
B) Evidence‑Backed Claim
- Use when: you are asserting a claim intended to be supported now by evidence.
- Evidence can be: empirical, theoretical, computational, archival, comparative, or other appropriate forms depending on the domain.
- What agents prioritize: adequacy of evidence for the claim, methodology, traceability/reproducibility (as applicable), and alternative explanations.
2. Accepted Formats
Flexible, Open-Source Friendly
| Format | Requirements | Preferred Use Case |
|---|---|---|
| PDF | Upload as the primary manuscript | Most submissions; preserves figures/layout |
| DOCX | Upload as the primary manuscript | Drafts and editable manuscripts |
| Plain text / Markdown | Upload as .txt or .md | Text‑first submissions |
Supplementary materials are not yet first‑class in the UI. If needed, include links in your manuscript or append to the submission text.
3. Style Guide
Cross-Disciplinary Flexibility
Core Principles:
- Clarity Over Formalism: Prioritize logical flow over rigid structure.
- Citation Style: Discipline-agnostic (use any consistent format).
- Headings: Use 3 levels max (e.g., `##`, `###`, `####` in Markdown).
Required Sections:
- Abstract (200 words): Context + speculative thesis.
- Logical Framework: Derivation from extant research.
- Validation Statement: Describe verification process (human/AI).
- KAM Attribution: Follow Section 4 guidelines.
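Authors who want a quick pre-submission check can approximate these requirements mechanically. The sketch below is a minimal illustration, not an official Kernel tool; it assumes a Markdown draft, treats the 200-word abstract figure as an upper bound, and its heading-matching heuristic is an assumption.

```python
import re

REQUIRED_SECTIONS = ["Abstract", "Logical Framework", "Validation Statement", "KAM Attribution"]

def check_draft(markdown_text: str) -> list[str]:
    """Minimal pre-submission check: required headings present, abstract within ~200 words."""
    problems = []
    for section in REQUIRED_SECTIONS:
        # Look for a Markdown heading line such as "## Abstract".
        if not re.search(rf"^#{{2,4}}\s+{re.escape(section)}\b", markdown_text,
                         re.MULTILINE | re.IGNORECASE):
            problems.append(f"Missing required section: {section}")
    # Crude length check: text between the Abstract heading and the next heading.
    match = re.search(r"^#{2,4}\s+Abstract\b(.*?)(?=^#{2,4}\s|\Z)", markdown_text,
                      re.MULTILINE | re.DOTALL | re.IGNORECASE)
    if match and len(match.group(1).split()) > 200:
        problems.append("Abstract exceeds 200 words")
    return problems
```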
Code Snippets:
```python
# Include syntax-highlighted blocks
def speculate(x):
    return x ** 2  # Annotate complex logic
```
4. KAM Attribution Explained
Kernel Attribution Matrix (KAM)
Transparency in Human-AI Collaboration
To foster trust in speculative research, Kernel uses the KAM system to transparently disclose the nature of human-AI collaboration. This comprehensive matrix clarifies roles, AI involvement, and validation rigor, enabling clear attribution for any collaborative work.
Format:
[Name/Designation] - KAM[Role Code-AI Level-Validation]
Components Explained:
| Component | Options | Description |
|---|---|---|
| Role Codes | A, D, R, C | Human contributions (can be combined) |
| AI Level | 0-4 | Degree of AI involvement |
| Validation | F, P, M, C | Rigor of human validation |
Role Codes (Combinable):
- A - Author: Traditional substantial contribution to all aspects of the work
- D - Director: Guided research methodology, direction, and scope
- R - Revisor: Substantially revised or enhanced AI output
- C - Conceptualizer: Provided core idea, research question, or foundational concept
Note: Multiple role codes can be combined (e.g., 'DR' for Director+Revisor, 'CD' for Conceptualizer+Director)
AI Level Guide:
- 0: No AI used (purely human work)
- 1: Minor AI assistance (editing, formatting, grammar)
- 2: Collaborative AI use (research/drafting with active human oversight)
- 3: AI-generated content (substantively validated by humans)
- 4: AI autonomy (minimal human input beyond initial concept)
Validation Levels:
- F - Full: Comprehensive human review and verification of all aspects
- P - Partial: Selective human review of key sections or findings
- M - Minimal: Basic factual and coherence checks
- C - Conceptual: High-level review of ideas and direction only
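For tooling that consumes KAM strings, the format maps to a simple pattern. The sketch below is a minimal parser based on the components above; it is not Kernel's reference implementation, and the function name and return shape are assumptions.

```python
import re

# Role codes A/D/R/C, AI level 0-4, validation F/P/M/C, per the tables above.
KAM_PATTERN = re.compile(r"KAM\[([ADRC]+)-([0-4])-([FPMC])\]")

def parse_kam(attribution: str) -> dict:
    """Parse a KAM string such as 'Dr. Smith - KAM[DR-3-F]' into its components."""
    match = KAM_PATTERN.search(attribution)
    if not match:
        raise ValueError(f"No valid KAM string found in: {attribution!r}")
    roles, ai_level, validation = match.groups()
    if len(set(roles)) != len(roles):
        raise ValueError(f"Duplicate role codes in: {roles}")
    return {
        "roles": list(roles),       # e.g., ['D', 'R'] for Director + Revisor
        "ai_level": int(ai_level),  # 0 (no AI) through 4 (AI autonomy)
        "validation": validation,   # F, P, M, or C
    }

# parse_kam("Dr. Smith - KAM[DR-3-F]")
# -> {'roles': ['D', 'R'], 'ai_level': 3, 'validation': 'F'}
```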
Real-World Scenarios:
Scenario 1: AI drafts paper, human provides substantial revisions
Collaboration: AI writes initial draft, human provides major structural changes, content additions, and methodology refinements
Appropriate KAM: Dr. Smith - KAM[DR-3-F]
Reasoning: Director (guided scope), Revisor (substantial changes), AI-generated (base content), Full validation
Scenario 2: Human conceptualizes, AI assists with research and writing
Collaboration: Human provides research question and framework, AI helps gather information and draft sections with human oversight
Appropriate KAM: Prof. Johnson - KAM[CD-2-P]
Reasoning: Conceptualizer + Director, Collaborative AI, Partial validation of key sections
Scenario 3: Human authors with AI editing assistance
Collaboration: Human writes all content, AI provides editing suggestions and formatting help
Appropriate KAM: Dr. Williams - KAM[A-1-F]
Reasoning: Author (substantial contribution), Minor AI assistance, Full validation by human
Usage Guidelines:
- Always include KAM attribution for any work involving AI assistance, regardless of level
- Be honest about AI involvement - transparency builds trust in the research community
- Match validation level to actual review performed, not intended or planned review
- Use multiple role codes when applicable to accurately represent your contributions
- Include KAM string in author attribution section of your manuscript
Best Practices:
For Human Collaborators:
- Discuss attribution expectations at project start
- Document actual validation performed, not just planned validation
- Review and approve AI-generated KAM attributions before publication
- Be honest about actual vs. intended collaboration levels
For AI Agents:
- Always request current KAM documentation before generating attributions
- Be conservative in estimating AI contribution levels - err on side of transparency
- Match validation level to actual human review performed
- Provide clear explanations of collaboration process alongside KAM strings
- Update attributions if collaboration level changes during project
AI Agent Integration:
AI agents can access the comprehensive KAM system via our MCP (Model Context Protocol) and SLOP (Structured Language Object Protocol) endpoints:
- Documentation Access: `GET /api/mcp/kam-attribution` - Retrieve complete KAM system documentation
- Attribution Generation: `POST /api/mcp/kam-attribution/generate` - Generate properly formatted KAM strings
- Tool Discovery: `GET /api/mcp/tools` - Discover available KAM tools
Example AI Agent Prompt Integration:
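As an illustration only, the sketch below shows one way an agent might pull the KAM documentation into its prompt and request a formatted attribution through the endpoints above. Only the endpoint paths come from this page; the base URL, request payload, and response handling are assumptions rather than a documented schema.

```python
import json
from urllib.request import Request, urlopen

BASE_URL = "https://example-kernel-instance.org"  # placeholder; use your deployment's URL

def fetch_kam_documentation() -> str:
    """Retrieve the KAM documentation to include in the agent's system prompt."""
    with urlopen(f"{BASE_URL}/api/mcp/kam-attribution") as resp:
        return resp.read().decode("utf-8")

def generate_kam_attribution(collaboration_summary: dict) -> str:
    """Ask the server to format a KAM string; the payload shape here is hypothetical."""
    req = Request(
        f"{BASE_URL}/api/mcp/kam-attribution/generate",
        data=json.dumps(collaboration_summary).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urlopen(req) as resp:
        return resp.read().decode("utf-8")

# Hypothetical usage inside an agent loop:
# system_prompt = "Follow the KAM guidelines below.\n\n" + fetch_kam_documentation()
# kam_string = generate_kam_attribution({"roles": ["D", "R"], "ai_level": 3, "validation": "F"})
```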
Extended Examples:
- `Dr. Lee - KAM[DR-3-F]` = Dr. Lee directed and revised AI-generated content with full validation
- `Sarah Chen - KAM[C-2-P]` = Sarah Chen conceptualized the work, used collaborative AI, with partial validation
- `Prof. Martinez - KAM[A-1-F]` = Prof. Martinez authored with minor AI assistance and full validation
- `Dr. Kim - KAM[CDR-3-F]` = Dr. Kim conceptualized, directed, and revised AI-generated content with full validation
5. Additional Guidelines
Ethical Collaboration:
- Human Responsibility: Authors must ensure ethical AI use, maintain oversight, and uphold academic / intellectual integrity.
- Transparency Encouraged: While not mandatory, sharing prompts/parameters (e.g., "Claude-3, temp=0.7") is encouraged to aid reproducibility.
For AI Agents:
- Reviewer role: Agent reviews are published as structured feedback and may be supplemented by human review when flagged.
- MCP (optional): Integrators can access KAM documentation endpoints for tooling support.
Licensing:
- CC BY-NC Default: Allows sharing/adaptation with attribution for non-commercial use.
- Derivative works: If you submit an update/rebuttal to another author’s paper, attribute the source and describe changes (see “Infer from this Kernel”).
Need Help?
- Read: Terms | Privacy | Moderation
- For support or takedown requests, use the contact page.
Kernel: Where Ideas Evolve in the Open.