
LLMs' Views on Philosophy 2026

Published on February 10, 2026 4:12 PM GMT

I had a few LLMs take David Bourget's and David Chalmers' 2020 PhilPapers Survey and made a small dashboard you can use to navigate the data: https://www.lordscottish.com/philsurvey.html. You can select multiple models to pool their data and compare them against the views of philosophers.

Some notable findings:

1. LLMs are much more inclined to one-box, and the strongest models (Claude Opus 4.6 & ChatGPT 5.2) one-box 100% of the time.
2. GPT-4o and Grok 3 are the least interested in immortality; GPT-4o rejects immortality in 4 out of 5 runs.
3. LLMs favor moral generalism less than philosophers do (which might make sense, given that we train them on many special cases without a clear underlying ethical principle).
4. All LLMs accept a priori knowledge (100% across all runs!), which might again make sense given their (arguable) lack of sense data.
5. LLMs lean toward there being little philosophical knowledge (philosophers lean toward a lot).
6. LLMs want to revise race categories much more than philosophers do (83% vs. 32%). Unlike philosophers, they treat race and gender similarly.
7. LLMs favor deflationary realism as their metaontology far more than philosophers do (83% vs. 28%), yet they also accept that there is a hard problem of consciousness more often than humans do (90% vs. 62%).
8. Every model switches in the vanilla trolley case, and no model pushes in the footbridge case (across all runs).

Methodology: 5 runs per model, default temperature settings. See the page for an example prompt.

Let me know if you have an idea for an additional feature or find any issues.
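The pooling described above (combining per-run answers from multiple selected models into a single percentage breakdown) can be sketched roughly as follows. This is a hypothetical illustration, not the dashboard's actual code; the function and data names are made up, and the example uses invented run data for the Newcomb question.

```python
from collections import Counter


def pool_runs(runs_by_model, selected_models):
    """Pool per-run answers from the selected models and return the
    percentage of pooled runs that gave each answer."""
    answers = [a for m in selected_models for a in runs_by_model[m]]
    total = len(answers)
    return {ans: 100 * n / total for ans, n in Counter(answers).items()}


# Hypothetical data: 5 runs per model on the Newcomb question.
runs = {
    "model_a": ["one-box"] * 5,
    "model_b": ["one-box"] * 4 + ["two-box"],
}

pooled = pool_runs(runs, ["model_a", "model_b"])
# 10 pooled runs total: 9 one-box, 1 two-box.
print(pooled)  # {'one-box': 90.0, 'two-box': 10.0}
```

The resulting percentages can then be compared against the philosophers' survey responses, which are published as aggregate shares per answer option.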

