2 changes: 1 addition & 1 deletion LICENSE
@@ -1,6 +1,6 @@
MIT License

-Copyright (c) 2024 Josh grenon
+Copyright (c) 2025 Josh Grenon

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
25 changes: 17 additions & 8 deletions README.md
@@ -33,7 +33,7 @@ let messages = [Message(role: "user", content: "What is the capital of France?")

// Make a chat completion request
do {
-let response = try await api.chatCompletion(messages: messages, model: .sonarLarge)
+let response = try await api.chatCompletion(messages: messages, model: .sonar)
print(response.choices.first?.message.content ?? "No response")
} catch {
print("Error: \(error)")
@@ -46,13 +46,15 @@ do {

The framework supports various Perplexity AI models through the `PerplexityModel` enum:

-- `.sonarSmallOnline`: "llama-3.1-sonar-small-128k-online"
-- `.sonarLargeOnline`: "llama-3.1-sonar-large-128k-online"
-- `.sonarHugeOnline`: "llama-3.1-sonar-huge-128k-online"
-- `.sonarSmallChat`: "llama-3.1-sonar-small-128k-chat"
-- `.sonarLargeChat`: "llama-3.1-sonar-large-128k-chat"
-- `.llama8bInstruct`: "llama-3.1-8b-instruct"
-- `.llama70bInstruct`: "llama-3.1-70b-instruct"
+### Research and Reasoning Models
+- `.sonarDeepResearch`: Advanced research model with 128K context length
+- `.sonarReasoningPro`: Enhanced reasoning model with 128K context length
+- `.sonarReasoning`: Base reasoning model with 128K context length
+
+### General Purpose Models
+- `.sonarPro`: Professional model with 200K context length
+- `.sonar`: Standard model with 128K context length
+- `.r1_1776`: Base model with 128K context length
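
For example, a request can target one of these models explicitly. The following is a minimal sketch that reuses the `api` and `messages` values from the quick-start example above:

```swift
// Minimal sketch: selecting a specific model for one request.
// Assumes `api` (PerplexityApiSwift) and `messages` are set up as in the quick-start above.
do {
    let response = try await api.chatCompletion(messages: messages, model: .sonarReasoningPro)
    print(response.choices.first?.message.content ?? "No response")
} catch {
    print("Error: \(error)")
}
```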

## Error Handling

@@ -62,6 +64,13 @@ PerplexityApiSwift defines a `PerplexityError` enum for common errors:
- `.invalidResponse(statusCode:)`: The API returned an invalid response with the given status code
- `.invalidResponseFormat`: The API response could not be decoded
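
Each case can be matched in a `catch` clause. Below is a minimal sketch; it assumes the cases listed above plus `.tokenNotSet`, which the library source throws when no bearer token has been set, and reuses `api` and `messages` from the quick-start example:

```swift
// Minimal sketch: handling PerplexityError cases individually.
// Assumes `api` and `messages` from the quick-start example above.
do {
    let response = try await api.chatCompletion(messages: messages)
    print(response.choices.first?.message.content ?? "No response")
} catch PerplexityError.tokenNotSet {
    print("Set a bearer token before making requests.")
} catch PerplexityError.invalidResponse(let statusCode) {
    print("The API returned an invalid response (status code \(statusCode)).")
} catch PerplexityError.invalidResponseFormat {
    print("The response could not be decoded.")
} catch {
    print("Unexpected error: \(error)")
}
```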

## Upcoming Features

The following features are planned for future releases:

- **Structured Outputs**: Support for receiving structured, typed responses from the API
- **Streaming Response**: Real-time streaming of model responses for improved user experience

## Documentation

For more detailed information about the Perplexity AI API, please refer to the official documentation:
2 changes: 1 addition & 1 deletion Sources/PerplexityApiSwift/PerplexityApiSwift.swift
@@ -9,7 +9,7 @@ public class PerplexityApiSwift {
self.bearerToken = token
}

-public func chatCompletion(messages: [Message], model: PerplexityModel = .sonarLargeOnline) async throws -> PerplexityResponse {
+public func chatCompletion(messages: [Message], model: PerplexityModel = .sonar) async throws -> PerplexityResponse {
guard let bearerToken = bearerToken else {
throw PerplexityError.tokenNotSet
}
16 changes: 9 additions & 7 deletions Sources/PerplexityApiSwift/PerplexityModels.swift
@@ -1,13 +1,15 @@
import Foundation

public enum PerplexityModel: String {
-case sonarSmallOnline = "llama-3.1-sonar-small-128k-online"
-case sonarLargeOnline = "llama-3.1-sonar-large-128k-online"
-case sonarHugeOnline = "llama-3.1-sonar-huge-128k-online"
-case sonarSmallChat = "llama-3.1-sonar-small-128k-chat"
-case sonarLargeChat = "llama-3.1-sonar-large-128k-chat"
-case llama8bInstruct = "llama-3.1-8b-instruct"
-case llama70bInstruct = "llama-3.1-70b-instruct"
+// Research and Reasoning Models
+case sonarDeepResearch = "sonar-deep-research" // 128k context
+case sonarReasoningPro = "sonar-reasoning-pro" // 128k context
+case sonarReasoning = "sonar-reasoning" // 128k context
+
+// General Purpose Models
+case sonarPro = "sonar-pro" // 200k context
+case sonar = "sonar" // 128k context
+case r1_1776 = "r1-1776" // 128k context
}

// We can keep this enum if it's still useful for your application