Cognition as symbolic computation over mental representations in the brain
Computationalism
The Core Idea
Cognition is computation. The mind is like software running on the brain’s hardware. Thinking is rule-governed manipulation of symbolic representations.
The brain processes information like a computer:
- Input: Sensory data encoded as symbols
- Processing: Logical/mathematical operations on symbols
- Output: Behavior based on computed results
Mental states are computational states. Beliefs, desires, perceptions are symbolic structures operated on by algorithms.
Central Theses
1. Language of Thought (Fodor)
Mental representations have language-like structure:
- Symbols with syntax and semantics
- Compositional (complex thoughts built from simple ones)
- Productive (a finite stock of symbols → unboundedly many possible thoughts)
- Systematic (if you can think “A loves B,” you can think “B loves A”)
Example: Believing “dogs chase cats” involves mental symbols for DOG, CHASE, CAT in a structured representation.
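The compositionality and systematicity claims can be made concrete with a toy data structure. This is only an illustrative sketch; the `Proposition` type and its field names are invented here, not taken from Fodor:

```python
from dataclasses import dataclass

# Hypothetical mini "language of thought": atomic symbols combined
# into a structured proposition (names are illustrative only).

@dataclass(frozen=True)
class Proposition:
    agent: str
    relation: str
    patient: str

    def swap(self) -> "Proposition":
        # Systematicity: the same machinery that represents
        # "A loves B" also represents "B loves A".
        return Proposition(self.patient, self.relation, self.agent)

belief = Proposition("DOG", "CHASE", "CAT")
print(belief)         # Proposition(agent='DOG', relation='CHASE', patient='CAT')
print(belief.swap())  # Proposition(agent='CAT', relation='CHASE', patient='DOG')
```

The point of the sketch: complex thoughts are built from reusable atomic symbols (compositionality), and rearranging them yields new well-formed thoughts for free (systematicity, productivity).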
2. Physical Symbol System Hypothesis (Newell & Simon)
A physical symbol system (such as a computer or a brain) has the necessary and sufficient means for general intelligent action.
Any system that manipulates symbols according to rules can think.
3. Multiple Realizability
Same computation can run on different hardware (silicon, neurons, anything). What matters is the algorithm, not the physical substrate.
This supports functionalism - mental states are computational states, multiply realizable.
Classical Cognitive Architecture
Information flows through modules:
Perception → Encoding → Central Processing → Output Planning → Action
Each stage trades in its own currency: perception delivers symbols, encoding builds structured representations, central processing applies inference and rules, and output planning issues motor commands.
- Modular: Vision, language, reasoning as separate computational systems
- Serial processing: One operation at a time (in the classical version)
- Explicit rules: If-then logic, production systems
- Declarative knowledge: Facts stored as propositions
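The "explicit rules" idea can be sketched as a toy production system: condition-action rules fire against a working memory of facts until nothing new can be derived. The rules and facts here are invented purely for illustration:

```python
# Minimal production-system sketch: each rule is a (condition, action)
# pair; rules fire against working memory until quiescence.

rules = [
    (lambda wm: "hungry" in wm and "has_food" in wm, "eat"),
    (lambda wm: "eat" in wm, "satiated"),
]

def run(working_memory: set) -> set:
    changed = True
    while changed:
        changed = False
        for condition, action in rules:
            if condition(working_memory) and action not in working_memory:
                working_memory.add(action)  # fire the rule
                changed = True
    return working_memory

print(sorted(run({"hungry", "has_food"})))
# ['eat', 'has_food', 'hungry', 'satiated']
```

Note that all knowledge here is explicit and declarative: the rules are inspectable if-then statements, which is exactly what the classical architecture assumes.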
Why This Matters
Legitimates Cognitive Science
If the mind is computational, we can study it scientifically:
- Build computational models
- Test predictions
- Reverse-engineer algorithms from behavior
AI Foundation
Computationalism justified GOFAI (Good Old-Fashioned AI):
- Expert systems
- Logic programming
- Symbolic reasoning
Level of Explanation
Cognitive science is autonomous from neuroscience. Study algorithms (computational level) independent of implementation (neural level).
Application to Research
Modeling Approach
- Formalize cognitive processes as algorithms
- Implement in code
- Test: Does model produce human-like behavior?
Examples:
- Production systems for problem-solving
- Semantic networks for memory
- Logic-based models of reasoning
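For instance, a semantic network for memory can be sketched as labeled links plus property lookup via is-a inheritance. The network below is a toy in the spirit of hierarchical semantic-memory models, not any specific published model:

```python
# Toy semantic network: nodes carry labeled relations; properties
# are inherited by walking up the "is_a" hierarchy.

network = {
    "canary": {"is_a": "bird", "can": "sing"},
    "bird":   {"is_a": "animal", "can": "fly"},
    "animal": {"can": "move"},
}

def has_property(concept: str, prop: str) -> bool:
    # Climb the is-a chain until the property is found or the chain ends.
    while concept in network:
        if network[concept].get("can") == prop:
            return True
        concept = network[concept].get("is_a", "")
    return False

print(has_property("canary", "fly"))   # True: inherited from "bird"
print(has_property("canary", "bark"))  # False
```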
Language Processing
Computationalism supports:
- Grammar as formal rules (Chomsky’s Universal Grammar)
- Parsing as algorithm applying grammatical rules
- Semantics as compositional computation over symbols
Bilingualism: Do bilinguals maintain two lexicons? Shared or separate computational systems for syntax?
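The "parsing as an algorithm applying grammatical rules" idea can be sketched as a recursive-descent parser for a tiny grammar (S → NP V NP, NP → Det N). Grammar and lexicon are invented for illustration:

```python
# Recursive-descent parsing sketch: each grammar rule becomes a
# procedure that consumes tokens and returns a parse-tree fragment.

LEXICON = {"the": "Det", "dog": "N", "cat": "N", "chases": "V"}

def parse_np(tokens, i):
    # NP -> Det N
    if LEXICON.get(tokens[i]) == "Det" and LEXICON.get(tokens[i + 1]) == "N":
        return ("NP", tokens[i], tokens[i + 1]), i + 2
    raise ValueError(f"no NP at position {i}")

def parse_s(tokens):
    # S -> NP V NP (verb phrase wrapped as VP in the output tree)
    np1, i = parse_np(tokens, 0)
    if LEXICON.get(tokens[i]) != "V":
        raise ValueError("expected verb")
    np2, j = parse_np(tokens, i + 1)
    return ("S", np1, ("VP", tokens[i], np2)), j

tree, _ = parse_s("the dog chases the cat".split())
print(tree)
# ('S', ('NP', 'the', 'dog'), ('VP', 'chases', ('NP', 'the', 'cat')))
```

The output tree is itself a compositional symbolic structure, which is why parsing is the textbook case for computationalism about language.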
Theory Structure
Build computational theories:
- What are the symbols?
- What are the operations?
- What’s the algorithm?
- What does it compute?
Critique and Limitations
Symbol Grounding Problem (Harnad) and Chinese Room (Searle)
How do symbols get meaning? Computer symbols are just patterns - they don’t mean anything to the computer.
Chinese Room: You can manipulate Chinese symbols by rule without understanding Chinese. Is that real understanding?
Frame Problem
How do you represent all relevant knowledge? Indefinitely many facts could matter for any decision, yet computationalism seems to require explicitly representing everything relevant (and explicitly ruling out everything irrelevant).
Embodiment Challenges
Body and environment matter for cognition, but classical computationalism treats them as mere input/output devices. Maybe cognition isn’t brain-bound symbol manipulation.
Connectionism
Neural networks learn patterns without explicit rules or symbols. Maybe cognition is subsymbolic - distributed patterns, not discrete symbols.
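The contrast can be made concrete with a minimal perceptron: it acquires a mapping from examples and stores its "knowledge" as weights, with no explicit rules anywhere. Logical AND serves as the toy task here:

```python
# Subsymbolic learning sketch: a perceptron learns logical AND.
# Knowledge ends up distributed in the weights, not stored as rules.

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [0.0, 0.0]
b = 0.0
lr = 0.1

for _ in range(20):                       # training epochs
    for (x1, x2), target in data:
        out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = target - out
        w[0] += lr * err * x1             # nudge weights toward the target
        w[1] += lr * err * x2
        b    += lr * err

print([1 if w[0] * x1 + w[1] * x2 + b > 0 else 0 for (x1, x2), _ in data])
# [0, 0, 0, 1]
```

Nothing in the trained system corresponds to a rule like "output 1 iff both inputs are 1"; the behavior emerges from weighted sums, which is the connectionist's point.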
Dynamical Systems
Maybe cognition is continuous dynamics, not discrete computation. Real-time coupling with environment, not internal symbol manipulation.
Statistical/Probabilistic Cognition
Humans aren’t logic machines. We use heuristics, probabilities, approximate reasoning. Pure symbolic computation is too brittle.
Contemporary Status
Not rejected, but:
- Weakened as sole explanation
- Hybrid models combine symbolic and subsymbolic
- Predictive processing, embodied cognition challenge classical assumptions
- Still valuable for some domains (reasoning, planning, language)
Where computationalism still works well:
- High-level reasoning
- Explicit problem-solving
- Language syntax
- Deliberate planning
Where it struggles:
- Perception
- Motor control
- Implicit learning
- Context-sensitive cognition
Connection to My Work
This framework shapes:
- When to use computational models: Explicit rule-following (syntax) vs. gradient patterns (semantics)
- Level of analysis: When algorithm-level explanation suffices vs. when implementation matters
- AI comparison: What LLMs/neural nets do vs. classical symbolic AI
- Bilingualism: Are language systems modular computational systems or integrated distributed networks?
Examples:
- Code-switching: Rule-governed computation over two grammars? Or something messier?
- Translation: Symbolic transfer between representations? Or pattern matching?
- Grammatical knowledge: Explicit rules (computationalism) or implicit statistical patterns (connectionism)?
Relation to Other Frameworks
- Functionalism: Computationalism is strongest version of functionalism (mental states = computational states)
- vs. Embodied Cognition: Direct opposition - computation in head vs. body-environment coupling
- vs. Extended Mind: Computation stays in brain vs. extends into world
- Intentionality: Computation might provide account of intentional content (via causal/functional role)
- Predictive Processing: Modern version that keeps some computational ideas but adds prediction/inference
Key Sources
- Newell, A., & Simon, H. (1976). “Computer Science as Empirical Inquiry: Symbols and Search”
- Fodor, J. (1975). The Language of Thought
- Pylyshyn, Z. (1984). Computation and Cognition
- Marr, D. (1982). Vision (computational level of explanation)
- Searle, J. (1980). “Minds, Brains, and Programs” (Chinese Room critique)
- Harnad, S. (1990). “The Symbol Grounding Problem”