Best Shaders for PS1 Games: How to Choose, Configure, and Understand What They Actually Do

29 April 2026 · 17 min read · Mark Baxman

You fire up your old PlayStation 1 game on your modern PC using an emulator, and suddenly you’re confronted with a menu full of shader options: Cg, GLSL, Slang, CRT, smoothing, upscaling, motion blur—the list goes on. You select something that sounds like it should make things look “better,” and the game either looks stunning or completely wrong. Maybe the colors are washed out. Maybe the scanlines are so thick they destroy the image. Maybe the whole thing looks like a blurry mess.

Here’s the real problem: most gaming communities and online guides treat shaders like aesthetic tweaks—like changing the color of your car. They’re not. Shaders are mathematical transformations of pixel data running in real-time on your GPU. They have specific engineering purposes, measurable output characteristics, and real trade-offs. Using the wrong shader, or using the right shader with wrong settings, doesn’t just make a game “look weird.” It can actually obscure details the original developers intended, add artifacts that don’t belong, or consume so much GPU overhead that your frame rate collapses.

After 25 years of working with digital systems and graphics hardware, I’ve watched countless people frustrate themselves trying shaders randomly until something looked passable. That’s not engineering. That’s guessing.

## What you’ll learn in this guide

This article explains how shaders actually work at the hardware level, what each major category of PS1 shader does to your image, and more importantly, *why* you’d choose one over another based on your setup, your monitor, and what you’re actually trying to achieve with the original game’s aesthetic.

You’ll understand the measurable differences between upscaling, filtering, scanline rendering, and color correction—not as marketing terms, but as distinct pixel operations with specific engineering trade-offs. By the end, you’ll have a decision framework to configure shaders intelligently instead of randomly testing combinations until something looks right by accident.

## How PS1 rendering actually worked (and why shaders matter)

The original PlayStation 1 output video at a fixed resolution—typically 320×240 or 640×480 depending on the game—with specific characteristics: dithering patterns to approximate color depth, 2D and 3D geometry rendered with affine texture mapping (not perspective-corrected), and output through analog video connectors to CRT monitors.

Modern LCD and OLED displays don’t work like CRT monitors. They don’t have scanning electron beams, they don’t have bloom or geometric distortion, and they display at fixed pixel-grid resolutions much higher than what the PS1 produced. When you display original PS1 output directly on a modern display without any shader intervention, you get one of two problems: either the image is tiny (because you’re showing 320×240 pixels on a 1440p monitor), or it’s stretched and blurry (because the display is trying to upscale non-integer pixel dimensions).

This is where shaders enter the picture. A shader is a small program that runs on your GPU, taking the original pixel data as input and producing transformed pixel data as output. That transformation happens in the rendering pipeline itself—not as a post-process applied to a screenshot, but as part of the actual graphics computation.

The fundamental shader operations for PS1 emulation fall into several distinct categories, and understanding the difference between them is the key to choosing correctly.

## Upscaling shaders: Making small images usable on modern displays

The most basic shader operation is **upscaling**—taking a 320×240 image and enlarging it to fill your monitor. The PS1 emulator needs to do this no matter what; the question is *how*.

The naive approach is **nearest-neighbor scaling**. This takes each original pixel and blows it up to multiple pixels on the output display. If your original image is 320×240 and your target is 1280×960—a 4× upscale—every original pixel becomes a 4×4 block. This preserves the original pixel boundaries perfectly and adds no blur. But it produces an obviously blocky image with hard, jagged edges on any diagonal line or curved surface.
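
In code, nearest-neighbor scaling is just pixel repetition. A minimal pure-Python sketch (real emulators do this on the GPU; lists of rows stand in for an image here):

```python
def nearest_neighbor_upscale(frame, factor):
    """Integer upscale: each source pixel becomes a factor x factor block.

    frame is a list of rows; each row is a list of pixel values.
    """
    out = []
    for row in frame:
        wide_row = [px for px in row for _ in range(factor)]  # repeat columns
        out.extend([list(wide_row) for _ in range(factor)])   # repeat rows
    return out

# A 2x2 frame scaled 2x becomes 4x4; a 320x240 frame scaled 4x becomes 1280x960.
frame = [[10, 20],
         [30, 40]]
print(nearest_neighbor_upscale(frame, 2))
```

No new pixel values are invented, which is exactly why edges stay razor-sharp and blocks stay blocky.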

The opposite extreme is **bilinear filtering**. This treats the original low-resolution image as a continuous function and samples it at the higher resolution. It’s smooth, but it also blurs the image significantly. Fine details, especially thin lines and text, become soft and indistinct. The result resembles what you saw on an analog CRT at a moderate distance, where the electron beam’s natural width produced a similar softness; note, though, that the blur came from the display rather than the console, because the PS1 never filtered its own output. On a modern LCD sitting 2 feet from your face, bilinear filtering often looks worse than nearest-neighbor because the blur is plainly visible at normal viewing distances.
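
Bilinear is barely more code: sample at fractional coordinates and blend the four surrounding pixels by distance. A sketch on a grayscale list-of-rows image, showing how a hard edge turns into a ramp:

```python
def bilinear_sample(img, x, y):
    """Sample a grayscale image (list of rows) at fractional coordinates."""
    h, w = len(img), len(img[0])
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    # Blend the four surrounding pixels by horizontal, then vertical fraction.
    top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
    bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
    return top * (1 - fy) + bot * fy

def bilinear_upscale(img, factor):
    h, w = len(img), len(img[0])
    return [[bilinear_sample(img, x / factor, y / factor)
             for x in range(w * factor)] for y in range(h * factor)]

# A hard black-to-white edge becomes a smooth ramp -- that is the blur.
img = [[0, 255],
       [0, 255]]
print(bilinear_upscale(img, 2)[0])  # [0.0, 127.5, 255.0, 255.0]
```

The 127.5 in the middle is a value that never existed in the source image; inventing in-between values is what softens thin lines and text.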

The middle ground is **edge-preserving upscaling**. This is where most modern PS1 shaders live. These algorithms attempt to enlarge the image while keeping edges sharp. The most common approach is **hq2x, hq3x, or hq4x**, which examine the surrounding pixels of each point being upscaled and use logic to determine whether an edge exists. If it does, the algorithm preserves the sharpness of that edge. If it doesn’t, it blends smoothly.

There are dozens of variations: **xBR** (short for “scale by rules”) uses a more sophisticated edge-detection algorithm that considers diagonal relationships. **Bicubic filtering** fits a smooth curve through pixel values, producing sharper upscaling than bilinear at the cost of potential ringing artifacts (slight halos around edges). **Lanczos** filtering uses a sinc-based kernel for high-quality upscaling but requires more computation.
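
The hqNx and xBR rule sets are far too large to reproduce here, but their common ancestor, EPX/Scale2x, fits in a few lines and shows the core idea: copy each pixel into a 2×2 block, then overwrite corners where the neighbors agree on an edge. This is a sketch of the published Scale2x rules, operating on plain lists of grayscale or palette-index values:

```python
def epx_2x(img):
    """EPX / Scale2x: a simple edge-preserving 2x upscaler.

    hqNx and xBR apply the same neighbor-comparison idea with far
    richer rule tables and blending.
    """
    h, w = len(img), len(img[0])
    out = [[0] * (w * 2) for _ in range(h * 2)]
    for y in range(h):
        for x in range(w):
            p = img[y][x]
            a = img[y - 1][x] if y > 0 else p          # above
            b = img[y][x + 1] if x < w - 1 else p      # right
            c = img[y][x - 1] if x > 0 else p          # left
            d = img[y + 1][x] if y < h - 1 else p      # below
            tl, tr, bl, br = p, p, p, p
            # Pull a neighbor color into a corner only when two adjacent
            # neighbors match (an edge) and the opposite sides disagree.
            if c == a and c != d and a != b: tl = a
            if a == b and a != c and b != d: tr = b
            if d == c and d != b and c != a: bl = d
            if b == d and b != a and d != c: br = b
            out[2 * y][2 * x], out[2 * y][2 * x + 1] = tl, tr
            out[2 * y + 1][2 * x], out[2 * y + 1][2 * x + 1] = bl, br
    return out

# A lone corner pixel gets smoothed into a diagonal step instead of a block.
print(epx_2x([[1, 0],
              [0, 0]]))
```

Flat regions pass through untouched (every rule requires a disagreement), which is why these scalers stay sharp where nearest-neighbor would and smooth only along detected edges.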

The measurable difference matters. Using nearest-neighbor on a PS1 game at a 4× upscale produces a visibly blocky image. Using hq4x at the same scale keeps edges crisp while smoothing the staircase artifacts. Using bilinear produces a soft, blurry result. These aren’t opinions; they’re measurable differences in pixel values.

Which should you use? It depends on your display distance and pixel density. A 24-inch 1080p monitor packs roughly 92 pixels per inch, and at normal arm’s length (about 24 inches) your eye can still resolve individual pixels at that density, so you’ll see the blockiness of nearest-neighbor scaling. An edge-preserving algorithm like hq3x or xBR will look sharper and more natural. If you’re projecting onto a large screen at distance, nearest-neighbor might actually look better because the blocks are far enough away that your eye doesn’t resolve them individually; they blend into the image naturally.
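
The pixel-density figure comes from simple geometry: diagonal resolution in pixels divided by diagonal size in inches. For example:

```python
import math

def pixels_per_inch(res_w, res_h, diagonal_inches):
    """Pixel density of a display from its resolution and diagonal size."""
    diagonal_pixels = math.hypot(res_w, res_h)  # sqrt(w^2 + h^2)
    return diagonal_pixels / diagonal_inches

# A 24-inch 1080p monitor lands at roughly 92 PPI;
# a 27-inch 1440p monitor at roughly 109 PPI.
print(round(pixels_per_inch(1920, 1080, 24)))  # 92
print(round(pixels_per_inch(2560, 1440, 27)))  # 109
```

Higher PPI or greater viewing distance both push the display past what your eye resolves, which is when nearest-neighbor’s blocks stop being objectionable.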

## CRT emulation: Recreating the monitor you’re not using

The second major category of PS1 shaders is **CRT emulation**. This is where things get genuinely interesting from an engineering perspective, and where most people make their worst decisions.

The original PS1 output analog video to CRT televisions and monitors. CRT displays have fundamental optical characteristics:

**Scanlines**: CRT monitors used an electron beam that swept across the screen horizontally, line by line. The beam had a specific width—thin enough to produce sharp horizontal detail, but with real physical width. This created visible horizontal lines across the image, especially on solid colors. These weren’t noise or imperfection; they were the fundamental operating mechanism of the display technology.

**Bloom and halation**: When an electron beam hits the phosphor coating inside a CRT tube, the phosphor glows, but the light doesn’t stay perfectly confined to that exact spot. There’s diffusion—light spreads slightly to neighboring areas. Bright pixels appear slightly larger and softer than they should be. On older, cheaper CRT tubes, this effect was more pronounced.

**Geometric distortion**: CRT tubes have barrel distortion—straight lines at the edges of the screen curve slightly. This is caused by the electron beam’s deflection coils not being perfectly linear across the entire screen. At the center, distortion is minimal. At the edges, it’s visible.

**Color fringing**: CRT displays use three electron guns (one for red, one for green, one for blue) that must be precisely aligned. They’re never perfectly aligned. At high zoom levels, you can see subtle color fringing on edges, especially at the screen corners.

A proper CRT emulation shader simulates some or all of these characteristics. The simplest approach is **scanline rendering**—adding horizontal dark lines across the output to simulate the scanline pattern. More sophisticated shaders add bloom, geometric distortion, and color convergence simulation.
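
The scanline pass, the simplest of these operations, is easy to express directly. This is a minimal sketch in plain Python (per-frame CPU code for illustration; a real shader performs the same arithmetic per pixel on the GPU), assuming grayscale rows and an `opacity` parameter corresponding to the percentages discussed in this section:

```python
def apply_scanlines(frame, opacity=0.25, period=2):
    """Darken every Nth output row to mimic the gaps between CRT scanlines.

    opacity 0.25 means the darkened rows keep 75% of their brightness.
    frame: list of rows of grayscale values (0-255).
    """
    out = []
    for y, row in enumerate(frame):
        if y % period == period - 1:                       # the "gap" row
            out.append([round(v * (1 - opacity)) for v in row])
        else:
            out.append(list(row))
    return out

# At 25% opacity, a flat white field becomes alternating 255 / 191 rows.
flat = [[255, 255]] * 4
print(apply_scanlines(flat))
```

Note what the math implies: every darkened row throws away brightness, which is why heavy scanlines dim the whole image and why real CRT shaders pair them with a bloom or gamma boost.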

Here’s where people make mistakes: **CRT emulation is not automatically better.** In fact, for many players and many games, it’s actively counterproductive.

The reason: original PS1 developers knew they were working with CRT output. They compensated for it. They used dithering patterns to approximate colors beyond the framebuffer’s bit depth, knowing the CRT’s soft beam would blend the pattern into smooth gradients. They used specific art direction that took scanlines into account. When you add CRT emulation to a PS1 game on a modern LCD, you’re simulating the display technology the game was designed around. The result can be beautiful; it approximates what the game looked like on original hardware. But it also adds visible artifacts (scanlines, bloom) that reduce sharpness and visual clarity compared to a non-CRT-emulated, properly upscaled image.

**The key trade-off:** CRT emulation trades sharpness and clarity for authenticity. If you’re playing a game and you want to experience it as closely as possible to how it appeared on original hardware, CRT emulation is the right choice. If you want maximum visual clarity and detail on modern display technology, you should disable CRT effects.

The specific implementation matters enormously. A poorly configured CRT shader with scanlines set to 75% opacity can destroy image clarity while providing almost no authentic CRT appearance. A well-tuned shader with scanlines at 20-30% opacity, combined with gentle bloom, can be nearly invisible at normal viewing distances while providing subtle visual cues that approximate the original experience.

## Filtering shaders: Texture mapping quality and affine vs. perspective

This is the category where most PS1-specific technical knowledge becomes relevant.

The original PS1 used **affine texture mapping**, which is faster than perspective-correct mapping but produces visible distortion on large polygons viewed at steep angles: textures appear to stretch, bend, and swim as the camera moves. This is one of the most recognizable “PS1 look” characteristics.

Some shaders can’t fix this, and you shouldn’t want them to—affine mapping is part of the PS1’s visual identity. But texture filtering shaders can improve how textures are sampled at different distances and angles.

The PS1 hardware used **nearest-neighbor texture sampling** by default. This means that when a texture is being drawn larger than its source resolution, individual texels (texture pixels) are simply repeated. On low-resolution PS1 textures, this creates a blocky, pixelated appearance that many players find charming but others find ugly.

**Bilinear texture filtering** (sometimes called “smooth” filtering in emulator options) interpolates between adjacent texels. It’s smoother and less blocky but also blurs fine texture detail. This is a genuinely useful option for many games—prerendered background art, for example, often looks better with bilinear filtering because the source art is higher resolution than the PS1 could have produced.

More sophisticated shaders like **trilinear filtering** and **anisotropic filtering** improve texture quality further, but at the cost of GPU overhead. Trilinear filtering interpolates between mipmap levels (pre-generated lower-resolution versions of textures) to avoid harsh transitions as polygons recede. Anisotropic filtering improves texture quality on surfaces that are at steep angles to the camera.

For PS1 emulation specifically, **the most common mistake is enabling anisotropic filtering too aggressively.** The PS1 never had anisotropic filtering. When you enable it, you’re making the game look different from how it appeared on original hardware. That’s not inherently wrong—many games do look better with it—but it’s changing the original aesthetic.

## Upsampling: Internal resolution versus output resolution

Many PS1 emulators allow you to increase the **internal rendering resolution**—the resolution at which the 3D graphics are actually rendered before being output to your display. This is different from upscaling shaders.

If you set internal rendering to 2× the original resolution (so 640×480 instead of 320×240) and then use nearest-neighbor upscaling to your display, you get sharper geometry and higher-quality texture sampling without the blur of bilinear filtering. The trade-off is significant GPU overhead: rendering at 2× resolution requires roughly 4× the processing power.
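
The quadratic cost is worth internalizing: doubling the internal resolution quadruples the pixels the GPU must fill. A quick check:

```python
def render_cost_multiplier(scale):
    """Rendering at N x internal resolution fills N*N as many pixels."""
    return scale * scale

# Pixel counts for a 320x240 base frame at common internal-resolution scales.
for scale in (1, 2, 3, 4):
    w, h = 320 * scale, 240 * scale
    print(f"{scale}x internal: {w}x{h} = {w * h:,} pixels "
          f"(~{render_cost_multiplier(scale)}x the fill cost)")
```

So the jump from 2× to 4× is not “twice as expensive”; it is four times the fill work of 2×, and sixteen times that of native resolution.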

This is genuinely useful for PS1 emulation because it preserves the PS1’s visual characteristics while increasing detail fidelity. Edges remain sharp. Textures remain sharp. Dithering patterns remain intact but become finer relative to the output, so they read as the artists intended rather than as coarse noise. But it requires a modern GPU to run smoothly; you can’t do 4× internal resolution on a modest laptop GPU and expect 60fps.

## Practical configuration: What to actually enable

Let me give you a concrete decision framework based on your setup and preferences.

**If you’re using a 1080p or 1440p display at normal viewing distance (24 inches or more) and you want visual clarity:**

1. Enable **internal resolution upscaling** to 2× if your GPU can handle it (most modern GPUs can). If not, stick with 1× but use an edge-preserving upscaling shader like **hq3x** or **xBR**.
2. Disable CRT emulation entirely, or enable it at very low intensity (scanlines at 15-20% opacity).
3. Use **nearest-neighbor texture filtering** if you want authentic PS1 appearance, or **bilinear** if you prefer smoother textures on prerendered art.
4. Leave **anisotropic filtering** at 1× (disabled). If you enable it, keep it to 2× maximum.

**If you’re using a large display (40+ inches) at distance, or projecting, and you want an authentic approximation:**

1. Use **2× or 4× upscaling** (depending on GPU capacity and target resolution).
2. Enable **moderate CRT emulation**—scanlines at 25-35% opacity, gentle bloom enabled.
3. Use **nearest-neighbor texture filtering**.
4. Disable **anisotropic filtering**.

**If you want maximum authenticity to original hardware:**

1. Use **nearest-neighbor upscaling only** (which means 1× internal resolution, then integer upscaling to your display—4×, 5×, or 6× depending on what divides evenly).
2. Enable **full CRT emulation** with scanlines, bloom, and geometric distortion.
3. Use **nearest-neighbor texture filtering**.
4. Enable **dithering** (many emulators allow you to disable it; don’t).

The critical insight: **there is no single “best” configuration.** There are only configurations that match your specific display hardware, viewing distance, and aesthetic preference. Choosing intelligently means understanding what each shader does and why, not just copying what someone on a forum said looked good on their system.
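
The framework above can be condensed into a simple lookup. To be clear, the keys and values here are illustrative placeholders summarizing the three configurations, not real emulator option names:

```python
def recommend_settings(display_inches, viewing_distance_inches, goal):
    """Sketch of the decision framework above.

    goal: "clarity", "authentic", or "max-authentic".
    Returned keys/values are placeholders, not actual emulator settings.
    """
    if goal == "max-authentic":
        return {"internal_res": 1, "upscale": "nearest (integer)",
                "crt": "full", "texture_filter": "nearest", "dithering": True}
    # Large screens viewed from far away tolerate (and benefit from) CRT effects.
    if display_inches >= 40 or viewing_distance_inches >= 60:
        return {"internal_res": 2, "upscale": "nearest",
                "crt": "moderate (25-35% scanlines, gentle bloom)",
                "texture_filter": "nearest", "dithering": True}
    # Default: desktop monitor at arm's length, prioritizing clarity.
    return {"internal_res": 2, "upscale": "hq3x/xBR if internal_res stays 1",
            "crt": "off or 15-20% scanlines",
            "texture_filter": "nearest", "dithering": True}

print(recommend_settings(27, 24, "clarity")["crt"])
```

Notice that dithering stays on in every branch; it is the one setting the framework never trades away.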

## Specific shader implementations across emulators

The major PS1 emulators—DuckStation, ePSXe, and RetroArch with the Beetle PSX core (a fork of Mednafen’s PS1 emulator)—support different shader systems and ship with different built-in shaders. (PCSX2, often mentioned in the same breath, is a PS2 emulator and won’t run PS1 games.)

**ePSXe** (a long-running plugin-based emulator) handles video through plugins; Pete’s OpenGL2 plugin supports GLSL shader effects, with upscaling performed by the plugin before shader application. Its texture filtering options live in the video plugin, not in shader files.

**DuckStation** (focused on accuracy and simplicity) has straightforward options: internal resolution scaling (performed by the emulator core), texture filtering options, and a small selection of display post-processing shaders. It’s easier to configure correctly because there are fewer options to get wrong.

**RetroArch with Beetle PSX** (the most flexible option) supports the Slang shader system, which includes hundreds of community-created shaders. This flexibility is also a liability: you can spend hours browsing shader combinations instead of actually playing games. The most respected PS1-appropriate presets in the Slang library are the **CRT-Royale** family (accurate CRT simulation with customizable parameters) and the **xBR** variants for upscaling.

None of these is objectively better. ePSXe suits people comfortable with the classic plugin workflow. DuckStation is best if you want simplicity and speed. RetroArch is best if you want granular control and don’t mind spending time on configuration.

## Understanding dithering: Don’t disable it

I need to address this specifically because it’s a common mistake. The PS1 used aggressive dithering to mask the banding of its 15-bit framebuffer: the GPU computed colors at higher precision, then dithered them down to the 15 bits per pixel the framebuffer could store. Dithering, the deliberate placement of slightly different color values in a spatial pattern, is visible on PS1 games at native resolution. It looks like noise.

Many emulator users disable dithering because they think it makes games look “cleaner.” This is wrong. Dithering is part of the PS1’s visual identity and artistic intent. When you disable it, you’re not revealing the “true” image; you’re creating a false version that never existed on original hardware.

If dithering bothers you, the correct response is to increase internal resolution upscaling, which makes the dithering pattern finer and less visible. Don’t disable it outright.
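
To make the mechanism concrete, here is a sketch of generic 4×4 ordered (Bayer) dithering from 8-bit grayscale down to the 5 bits per channel of a 15-bit framebuffer. The matrix below is the textbook Bayer pattern, not the PS1 GPU’s exact dither matrix:

```python
# Textbook 4x4 Bayer matrix (the PS1's own matrix differs in detail).
BAYER_4X4 = [[ 0,  8,  2, 10],
             [12,  4, 14,  6],
             [ 3, 11,  1,  9],
             [15,  7, 13,  5]]

def dither_to_5bit(img):
    """img: rows of 8-bit grayscale values. Returns 5-bit values (0-31)."""
    out = []
    for y, row in enumerate(img):
        new_row = []
        for x, v in enumerate(row):
            # Add a position-dependent offset before quantizing, so flat
            # areas break up into the familiar checker-like pattern.
            threshold = (BAYER_4X4[y % 4][x % 4] + 0.5) / 16.0  # 0..1
            new_row.append(min(31, int(v / 255 * 31 + threshold)))
        out.append(new_row)
    return out

# A mid-gray field quantizes to a mix of neighboring 5-bit levels (15s and
# 16s) instead of one flat band -- that mix is the "noise" you see up close.
print(dither_to_5bit([[128] * 4] * 4))
```

Viewed at a distance (or blended by a CRT beam), the mixed 15/16 pattern averages out to the in-between shade the 15-bit framebuffer can’t store, which is the entire point; deleting the pattern deletes that shade.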

## GPU overhead and real-world performance

This matters more than most guides acknowledge. A fancy shader setup with 4× internal rendering, advanced CRT effects, and high-quality filtering might look beautiful, but if it reduces your frame rate from 60fps to 45fps, you’ve degraded the playing experience. The PS1 output video at 60Hz in NTSC regions and 50Hz in PAL regions, and games were timed around that refresh. Frame rate consistency is more important than visual perfection.

Before committing to a shader configuration, actually play a game with it for 15 minutes. Not just look at a screenshot, not just run a benchmark, but play. Does it feel responsive? Can you maintain consistent 60fps through action scenes? If not, dial back your settings.

For reference, here’s what you can expect on modern hardware:

– **Native resolution (no upscaling), minimal shaders**: 60fps on integrated graphics from 2015+
– **2× internal resolution, moderate CRT shader**: 60fps on integrated graphics from 2018+, or any discrete GPU
– **4× internal resolution, advanced CRT shader**: 60fps only on discrete GPUs (GTX 1060 or better, or equivalent AMD)

## Advanced consideration: Shader chains and custom configurations

RetroArch allows you to combine multiple shaders in sequence. For example, you might use an upscaling shader (xBR), then a CRT shader, then a color correction shader. This gives you enormous flexibility, but also enormous opportunity to create visually awful results through bad combinations.

If you’re not experienced with shader chains, don’t use them. Use a single, well-tuned preset built for PS1 content. The community-recommended setups (like CRT-Royale combined with a good upscaler) have already been tested and tuned by people who know what they’re doing.

If you do want to experiment, start with a single shader, understand what it does to your image, then add a second shader and observe the change. Don’t load five shaders at once and wonder why everything looks wrong.
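
The one-shader-at-a-time discipline is easier to keep if you think of a chain as function composition, where order matters. A toy sketch (list-of-rows “frames” and made-up passes, nothing emulator-specific):

```python
from functools import reduce

def apply_chain(frame, passes):
    """Run a frame through an ordered list of shader-like passes.

    Each pass is just a function frame -> frame; order matters, exactly
    as it does in a multi-pass shader preset.
    """
    return reduce(lambda f, shader_pass: shader_pass(f), passes, frame)

def upscale2x(f):
    # Repeat each value horizontally and each row vertically.
    return [[v for v in row for _ in (0, 1)] for row in f for _ in (0, 1)]

def scanlines(f):
    # Halve the brightness of every odd row.
    return [[v // 2 for v in row] if y % 2 else list(row)
            for y, row in enumerate(f)]

# Upscale first, THEN scanlines. Swapping them would draw scanlines and
# then stretch them into fat bars -- a classic chain-ordering mistake.
out = apply_chain([[200, 200]], [upscale2x, scanlines])
print(out)  # [[200, 200, 200, 200], [100, 100, 100, 100]]
```

Adding a pass to the list and re-checking the output, one pass at a time, is exactly the experimentation loop described above.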

## The honest truth about shader choice

After all this technical explanation, here’s what you need to understand: **the “best” shader is the one that matches your specific preferences and hardware constraints, not the one that internet consensus says is best.**

Your monitor’s characteristics, your viewing distance, your eyesight, and what you want from the playing experience all matter. A shader configuration that looks perfect on a 55-inch TV at 10 feet will look wrong on a 27-inch monitor at 2 feet. A shader that preserves original authenticity might be annoying to someone who just wants a pleasant modern experience.

The only way to know what works for you is to test it yourself. Load a game you know well. Try a shader. Play for a few minutes. Notice whether you’re enjoying the game or distracted by the graphics. Change a setting. Try again. This iterative process is how you find your personal optimum, not by following a guide.

That said, I’d recommend starting with **2× internal resolution upscaling and no CRT effects**. This improves visual clarity without changing the fundamental aesthetic. Play for 20 minutes. If you want it to look more authentic, enable moderate CRT effects (scanlines at 25% opacity, gentle bloom). If you want more sharpness, enable an edge-preserving upscaling shader. If you want higher fidelity, increase internal resolution to 3× or 4×.

This approach—starting with a moderate baseline and adjusting from there—beats random experimentation and generally produces good results for most players on most systems.

Have questions about shader configuration for your specific setup? Share details about your display resolution, GPU, and what games you’re playing in the comments, and I can offer more specific recommendations.
