How to Export Millions of Records in Laravel Without Timeouts | Queue-Based Scalable CSV Exports

Learn how to export millions of rows in Laravel without timeouts or memory problems. Build a scalable, queue-driven CSV export system with chunked streaming, background processing, and real-time progress tracking.

Muhammad Ishaq
07 Oct 2025
4-5 minute read

Introduction

Exporting large datasets in Laravel, such as product histories, financial data, or analytics logs, often causes slow performance, memory issues, and browser timeouts.

When your application needs to handle millions of records, traditional export methods, such as returning Excel::download() or building the whole response inside a controller, cannot handle the load efficiently.

The best solution is a queue-based export pipeline. This method processes data in the background, writes it to disk in chunks, and allows users to download the final file when it is ready, without putting pressure on your server.

In this guide, you will learn how to build a high-performance CSV export system in Laravel that:

  • Handles millions of records efficiently

  • Uses queues and chunked queries for stability

  • Allows users to choose export columns

  • Provides real-time progress tracking

  • Works entirely with native Laravel (no third-party packages)
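Before building each piece, here is a rough sketch of how the HTTP layer could be wired together. The route paths and the show method are assumptions for illustration (the original controller only defines store), so adapt them to your application's conventions.

use App\Http\Controllers\ProductPriceHistoryExportController;
use Illuminate\Support\Facades\Route;

Route::middleware(['auth'])->group(function () {
    // Step 2: create a new export request and dispatch the queued job
    Route::post('/exports/product-price-history', [ProductPriceHistoryExportController::class, 'store']);

    // Step 4: poll the status and progress of an export (hypothetical show method)
    Route::get('/exports/{exportHistory}', [ProductPriceHistoryExportController::class, 'show']);
});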


Step 1: Create the export_histories Table

This table stores export job information, progress, and user-specific details.

Schema::create('export_histories', function (Blueprint $table) {
    $table->id();
    $table->foreignId('user_id')->constrained();
    $table->string('name');
    $table->string('status')->default('pending'); // pending, processing, completed, failed
    $table->json('columns')->nullable();
    $table->string('path')->nullable();
    $table->integer('progress')->default(0);
    $table->timestamps();
});

This schema keeps all export jobs organized, auditable, and easy to manage in dashboards or admin panels.
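The migration assumes a matching ExportHistory Eloquent model. The model itself is not shown in this post, so the following is only a minimal sketch; the fillable list and casts are assumptions based on how the job uses the model later on.

class ExportHistory extends Model
{
    protected $fillable = ['user_id', 'name', 'status', 'columns', 'path', 'progress'];

    // Cast the json column to an array so the job can read $export->columns directly
    protected $casts = [
        'columns' => 'array',
    ];

    // Needed later for $export->user->notify(...)
    public function user()
    {
        return $this->belongsTo(User::class);
    }
}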


Step 2: Controller for Creating Export Requests

The controller validates the selected columns, saves the export request, and dispatches a queued job.

class ProductPriceHistoryExportController extends Controller
{

    public function store(Request $request)
    {

        $validated = $request->validate([
            'columns' => 'required|array|min:1',
            'columns.*' => 'in:product_code,product_name,old_price,new_price,changed_at,user_id',
            'name' => 'nullable|string|max:255',
        ]);

 

        $export = ExportHistory::create([
            'user_id' => $request->user()->id,
            'name' => $validated['name'] ?? 'product-price-history-export',
            'columns' => $validated['columns'],
        ]);

        GenerateProductPriceHistoryExport::dispatch($export->id)->onQueue('exports');

        return response()->json([
            'export_id' => $export->id,
            'status' => $export->status,
        ], 202);

    }

}

Benefit: The process runs in the background, allowing the user to continue using the application while the export completes.
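Since the job is pushed onto a dedicated exports queue, a worker must be processing that queue, for example:

php artisan queue:work --queue=exports

If you use the database or Redis queue driver, also make sure retry_after in config/queue.php is larger than the job's $timeout (21600 seconds here), otherwise a long-running export can be picked up again while it is still in progress.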


Step 3: Queued Job for Chunked CSV Export

This queued job processes data in small batches, writes directly to a CSV file, and avoids memory overload.

class GenerateProductPriceHistoryExport implements ShouldQueue
{

    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    public $timeout = 21600; // allow up to 6 hours for very large exports
    public $tries = 3;       // retry up to 3 times before marking the job as failed

    public function __construct(public int $exportId) {}

    public function handle()
    {
    
        $export = ExportHistory::findOrFail($this->exportId);

        $export->update(['status' => 'processing']);

        $columns = $export->columns ?? ['product_code', 'product_name', 'old_price', 'new_price', 'changed_at'];

        $tmpPath = storage_path("app/exports/tmp_{$export->id}.csv");

        // Make sure the temporary directory exists before opening the file
        if (! is_dir(dirname($tmpPath))) {
            mkdir(dirname($tmpPath), 0755, true);
        }

        $handle = fopen($tmpPath, 'w');

        fputcsv($handle, $columns); // header row

        // chunkById() paginates on the id column, so select it alongside the user-chosen columns
        $query = DB::table('product_price_histories')->select(array_values(array_unique(array_merge(['id'], $columns))));

        $total = $query->count();

        $processed = 0;

        $chunkSize = 2000;

        // chunkById() orders by id internally, so no explicit orderBy() is required
        $query->chunkById($chunkSize, function ($rows) use ($handle, &$processed, $total, $export, $columns) {

            foreach ($rows as $row) {

                fputcsv($handle, collect($row)->only($columns)->toArray());
                // You may add a custom formatter here, or load relations, if the data is more complex.

                $processed++;

            }

            $progress = intval(($processed / max($total, 1)) * 100);

            $export->update(['progress' => $progress]);

        });

        fclose($handle);

        $finalPath = "exports/{$export->id}/product-price-history.csv";

        Storage::disk('public')->putFileAs(
            "exports/{$export->id}",
            new \Illuminate\Http\File($tmpPath),
            'product-price-history.csv'
        );

        // The temporary file is no longer needed once it has been copied to the public disk
        @unlink($tmpPath);

        $export->update([
            'status' => 'completed',
            'path' => $finalPath,
            'progress' => 100,
        ]);

        $export->user->notify(new ExportReadyNotification($export));

    }

 

    public function failed(\Throwable $e)
    {
        if ($export = ExportHistory::find($this->exportId)) {
            $export->update(['status' => 'failed']);
        }
    }

}

Why It Works:
The job holds only one chunk of rows in memory at a time and appends each chunk straight to the file on disk, so memory usage stays flat even with millions of rows.
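The job notifies the user through an ExportReadyNotification class, which is not shown above. A minimal sketch, assuming a mail channel and the public-disk URL used in Step 4:

class ExportReadyNotification extends Notification
{
    use Queueable;

    public function __construct(public ExportHistory $export) {}

    public function via($notifiable)
    {
        return ['mail'];
    }

    public function toMail($notifiable)
    {
        return (new MailMessage)
            ->subject('Your export is ready')
            ->line("The export \"{$this->export->name}\" has finished processing.")
            ->action('Download CSV', Storage::disk('public')->url($this->export->path));
    }
}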


Step 4: Frontend Polling and Download Flow

After the export starts, the frontend can poll for progress updates using an API endpoint:

{
  "export_id": 42,
  "status": "processing",
  "progress": 65
}
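A minimal controller action that could serve this response might look like the following. The show method and the download_url field are assumptions for illustration; they are not part of the controller shown in Step 2.

public function show(Request $request, ExportHistory $exportHistory)
{
    // Only the owner of the export may poll it
    abort_unless($exportHistory->user_id === $request->user()->id, 403);

    return response()->json([
        'export_id' => $exportHistory->id,
        'status' => $exportHistory->status,
        'progress' => $exportHistory->progress,
        'download_url' => $exportHistory->status === 'completed'
            ? Storage::disk('public')->url($exportHistory->path)
            : null,
    ]);
}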

When the export is complete, the file can be downloaded using:

Storage::disk('public')->url($export->path);

This gives users a smooth experience: they can trigger the export, continue their work, and download the file later when it is ready.


Step 5: Optional Automatic Cleanup

To prevent old files from piling up, schedule a cleanup task.

protected function schedule(Schedule $schedule)
{

    $schedule->call(function () {
        ExportHistory::where('created_at', '<', now()->subDays(7))
            ->each(function ($export) {
                Storage::disk('public')->deleteDirectory("exports/{$export->id}");
                $export->delete();
            });
    })->daily();

}
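If your application runs Laravel 11 or newer, where the console Kernel was removed from the default skeleton, roughly the same cleanup can be registered in routes/console.php using the Schedule facade. A sketch:

// routes/console.php
Schedule::call(function () {
    ExportHistory::where('created_at', '<', now()->subDays(7))
        ->each(function ($export) {
            Storage::disk('public')->deleteDirectory("exports/{$export->id}");
            $export->delete();
        });
})->daily();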

Key Advantages of This Approach

  • Scalable: Easily handles millions of records without affecting performance.

  • Memory-safe: Streams data directly to disk instead of loading everything in memory.

  • User-friendly: Uses background jobs with progress tracking for a smooth experience.

  • Flexible: Allows users to choose which columns to export.

  • Reliable: Automatically retries failed jobs to ensure successful exports.

  • Native Laravel: Built entirely with Laravel’s core features, no third-party packages needed.


Conclusion

Traditional Laravel export methods cannot handle very large datasets efficiently.
Using a queue-based, chunked CSV export system ensures:

  • No timeouts or crashes

  • Constant memory usage

  • Background job processing

  • Reusable and modular code

This approach is ideal for exporting large data such as reports, analytics, and product price histories in Laravel.


Frequently Asked Questions

Q1: Why not use Maatwebsite/Excel?
Its default export flow builds the whole spreadsheet in memory through PhpSpreadsheet, which does not scale to millions of rows. Streaming CSV rows straight to disk, as shown above, is far more memory-efficient.

Q2: Can this be used for Excel (XLSX)?
Yes, but CSV is faster and more memory-efficient. For Excel, use a streaming writer library.

Q3: How can users be notified when exports are ready?
Use Laravel Notifications to send an email or in-app alert once the export is completed.

Q4: What happens if the job fails?
Laravel automatically retries based on the $tries value. If all retries fail, the export is marked as failed.

Q5: Can I track progress in real-time?
Yes, poll the progress value using an API to show a progress bar in your frontend.

Q6: How large can the export be?
Very large. Since the data is streamed to disk, it can handle hundreds of megabytes or millions of rows without issues.