optillm is an OpenAI API compatible optimizing inference proxy that implements several state-of-the-art techniques to improve the accuracy and performance of LLMs. The current focus is on techniques that improve reasoning for coding, logical, and mathematical queries. By spending additional compute at inference time, these techniques can beat frontier models across diverse tasks.
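Because the proxy is OpenAI API compatible, a request can be sketched with nothing but a standard Chat Completions payload. The sketch below is illustrative only: the `localhost:8000` address and the slug-prefixed model name are assumptions for this example, not verified defaults.

```python
import json

# Minimal sketch of a request body for an OpenAI-compatible proxy.
# Assumptions (illustrative, not verified defaults): the proxy listens at
# http://localhost:8000/v1 and a technique is selected by prefixing its
# slug to the upstream model name.

def optillm_model(slug: str, model: str) -> str:
    """Join a technique slug and an upstream model name."""
    return f"{slug}-{model}"

payload = {
    "model": optillm_model("plansearch", "gpt-4o-mini"),
    "messages": [
        {"role": "user", "content": "Plan a solution first, then code it."}
    ],
}

# The JSON body would be POSTed to <proxy>/v1/chat/completions.
print(json.dumps(payload, indent=2))
```

Any client that accepts a custom `base_url` (the official OpenAI SDKs, `curl`, etc.) can send this payload unchanged.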
[](https://huggingface.co/spaces/codelion/optillm)
[](https://colab.research.google.com/drive/1SpuUb8d9xAoTh32M-9wJsB50AOH54EaH?usp=sharing)
## Patchwork with optillm
| PlanSearch | `plansearch` | Implements a search algorithm over candidate plans for solving a problem in natural language |
| LEAP | `leap` | Learns task-specific principles from few-shot examples |
| ReRead | `re2` | Implements rereading to improve reasoning by processing queries twice |
| CoT Decoding | N/A for proxy | Implements chain-of-thought decoding to elicit reasoning without explicit prompting |