Compare commits

...

300 Commits
0.2.0 ... 0.7.1

Author SHA1 Message Date
54c5c3d82b 0.7.1 2013-09-24 21:57:29 +02:00
2a6514eb5d Update to requests 2.0.0
Closes #140.
2013-09-24 21:49:43 +02:00
22c2cc6465 Removed unused import. 2013-09-24 20:30:54 +02:00
2265edf05e Cleanup 2013-09-24 20:15:19 +02:00
87774acf5c Changelog 2013-09-24 20:09:23 +02:00
9d2ac5d8ad 0.7.0 2013-09-24 20:07:48 +02:00
3e4e1c72a4 Merge branch 'master' of github.com:jkbr/httpie 2013-09-24 19:51:06 +02:00
29f6b6a2a9 Improved Content-Disposition parsing for --download mode
Closes #168.
2013-09-24 19:50:37 +02:00
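Parsing a filename out of a Content-Disposition header amounts to something like the sketch below (illustrative only; the helper name is made up and this is not HTTPie's actual implementation):

    import re

    def filename_from_content_disposition(value):
        # 'attachment; filename="report.pdf"' -> 'report.pdf' (quotes optional)
        match = re.search(r'filename\s*=\s*"?([^";]+)"?', value or '')
        return match.group(1).strip() if match else None

    assert filename_from_content_disposition(
        'attachment; filename="report.pdf"') == 'report.pdf'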
26b2d408e7 Merge pull request #167 from matt-hickford/master
Fix plugins ImportError
2013-09-23 02:13:14 -07:00
b5f180a5ee Fix plugins ImportError described at https://github.com/jkbr/httpie/issues/166#issuecomment-24905910 2013-09-23 09:54:06 +01:00
354aaa94bd Improved .netrc example formatting. 2013-09-22 15:20:50 +02:00
2ad4059f92 Improved .netrc example formatting. 2013-09-22 15:19:59 +02:00
5a6b65ecc6 Added link to httpie-oauth. 2013-09-22 15:10:50 +02:00
2acb303552 Added support for auth plugins. 2013-09-21 23:46:15 +02:00
f7b703b4bf Added --ignore-stdin
Closes #150
2013-08-23 10:57:17 +02:00
00de49f4c3 Cleanup 2013-08-18 00:59:10 +02:00
67496162fa Improved --help output. 2013-08-10 11:56:19 +02:00
8378ad3624 Try to import argparse before adding it to reqs. 2013-08-01 09:07:33 +02:00
f87884dd8d README 2013-08-01 08:46:37 +02:00
b671ee35e7 Merge pull request #153 from lorin/patch-1
Augment cookie example in README for multiple cookies
2013-07-31 07:52:22 -07:00
69247066dc Augment cookie example in README for multiple cookies
This change updates the README to show how to pass multiple cookies.
2013-07-31 10:29:38 -04:00
383dba524a Print error when download is interrupted by server
Close #147
2013-07-07 17:00:03 +02:00
60f09776a5 httpless also outputs response headers by default 2013-06-03 12:28:04 +02:00
48719aa70e README 2013-06-03 12:22:34 +02:00
809a461a26 v0.6.0 2013-06-03 12:19:43 +02:00
c3d550e930 Fixed headers tests; Require requests>=1.2.3. 2013-06-02 20:47:29 +02:00
172df162b3 Added XML formatting to CHANGELOG. 2013-06-02 20:27:58 +02:00
1bad62ab0e Handle unicode when formatting XML. 2013-06-02 20:25:36 +02:00
8d302f91f9 Merge branch 'master' of git://github.com/jargonjustin/httpie into jargonjustin-master 2013-06-02 20:14:51 +02:00
63b61bc811 Add custom Host example. 2013-05-20 15:31:02 +02:00
5af88756a6 Fixed download ETA for Python 2.6. 2013-05-14 12:49:29 +02:00
7f624e61b5 Use Thread instead of Timer for progress reporting. 2013-05-14 12:49:03 +02:00
6e848b3203 cleanup 2013-05-14 12:14:08 +02:00
8e112a6948 test_download_no_Content_Length 2013-05-13 15:35:12 +02:00
87c59ae561 Added anonymous sessions (--session=/file/path.json). 2013-05-13 14:47:44 +02:00
76eebeac2a 0.6.0-dev 2013-05-13 12:42:16 +02:00
5b9cbcb530 v0.5.1 2013-05-13 12:40:25 +02:00
8ad33d5f6a Changelog 2013-05-13 12:20:54 +02:00
86ac4cdb7b Changelog 2013-05-13 12:20:28 +02:00
e09b74021c Ignore Content-* and If-* request headers.
Those headers are not stored in sessions anymore.

Closes #141.
2013-05-13 11:54:49 +02:00
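The rule described above boils down to a simple prefix check; a sketch (function name made up, not the project's code):

    def should_persist_in_session(header_name):
        # Per-request headers such as Content-* and If-* are skipped (see #141).
        name = header_name.lower()
        return not (name.startswith('content-') or name.startswith('if-'))

    assert should_persist_in_session('Authorization')
    assert not should_persist_in_session('Content-Length')
    assert not should_persist_in_session('If-Modified-Since')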
71e7061014 v0.5.0 2013-04-27 12:03:38 -03:00
bc756cb6a2 Cleanup 2013-04-27 11:57:13 -03:00
63ed4d32a7 Merge remote-tracking branch 'origin/master' 2013-04-17 13:52:02 -03:00
d1b91bfa9c Merge pull request #142 from capncodewash/netrc-example
Added example for .netrc usage (closes #139)
2013-04-17 09:46:50 -07:00
dac79a8efc Added example for .netrc usage (see issue #139 in upstream). 2013-04-17 16:32:55 +01:00
1fc8396c4b Stop the progress reporter thread on error. 2013-04-16 04:55:45 -03:00
6c3b983c18 Tests 2013-04-15 00:56:47 -03:00
cfa7199f0b Added a simple download test. 2013-04-13 15:34:31 -03:00
5a1177d57e Fixed downloads with no Content-Length. 2013-04-13 14:50:46 -03:00
c63a92f9b7 Cleanup 2013-04-12 22:02:34 -03:00
d17e02792b Fixed length progress bar. 2013-04-12 21:49:27 -03:00
fc4f70a900 Colorize stderr on Windows. 2013-04-12 17:15:21 -03:00
1681a4ddd0 TODOs 2013-04-12 15:27:26 -03:00
289e9b844e Fixed Content-Type retrieval for Python 3. 2013-04-12 14:07:21 -03:00
72cf7c2cb7 Fixed tests for Python 2.6. 2013-04-12 13:42:34 -03:00
4d84d77851 Cleanup 2013-04-12 13:09:57 -03:00
1b98505537 Validate download options before setting up streams. 2013-04-12 11:59:23 -03:00
d32acfe2fa Only use Range when already have a partial download. 2013-04-12 11:56:05 -03:00
e8d79c4d8c Docs fix. 2013-04-12 11:37:58 -03:00
38206e9e92 Cleanup 2013-04-12 11:26:42 -03:00
55d5e78324 --download docs (#104). 2013-04-12 11:06:03 -03:00
341272db1e Added support for output redirection with --download (#104). 2013-04-12 11:04:14 -03:00
464b7a36da Tests 2013-04-12 10:20:01 -03:00
9d043eb745 Used Content-Disposition filename (#104). 2013-04-12 10:19:49 -03:00
40bd8f65af Handle KeyboardInterrupt while --download'ing (#104). 2013-04-12 09:08:19 -03:00
347653b369 Performance and progress bar improvements.
#104
2013-04-12 08:59:33 -03:00
ebfce6fb93 Improved progress bar (#104). 2013-04-11 18:51:21 -03:00
674acfe2c2 Cleanup 2013-04-11 16:23:15 -03:00
7ccdece39f Cleanup 2013-04-11 04:00:41 -03:00
e53dcba03e Added Content-Range parsing tests.
#104
2013-04-11 03:49:01 -03:00
486657afa3 Improved Content-Range parsing.
#104
2013-04-11 03:24:59 -03:00
599bc0519f Download resume improvements.
- Set correct Range
- Validate response status
- Validate Content-Range

 #104
2013-04-11 02:29:10 -03:00
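The Content-Range validation mentioned above is roughly the following check (a simplified sketch, names made up):

    import re

    def total_size_from_content_range(value, resumed_from):
        # Validate e.g. 'bytes 100-4999/5000' against the resume offset.
        match = re.match(r'^bytes (\d+)-(\d+)/(\d+)$', value or '')
        if not match:
            raise ValueError('invalid Content-Range: %r' % value)
        first, last, total = (int(g) for g in match.groups())
        if first != resumed_from or last != total - 1:
            raise ValueError('unexpected Content-Range: %r' % value)
        return total

    assert total_size_from_content_range('bytes 100-4999/5000', resumed_from=100) == 5000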
21613faa5a Progress bar update 2013-04-10 13:07:05 -03:00
36bc64e02f Cleanup. 2013-04-10 12:53:25 -03:00
6e5c696ac9 --json with no data sets Content-Type as well
Closes #137
2013-04-02 11:07:14 -03:00
9b2a293e6e Progress on --download. 2013-03-24 11:23:18 -03:00
b0dd463687 Corrected session info in the README. 2013-03-22 16:26:51 -03:00
bffaee13ff Formatting 2013-03-20 12:07:23 -03:00
30afcea72d Merge pull request #135 from Scorpil/master
Fixed PyPy cookie updating issue

Closes #132
2013-03-20 08:05:23 -07:00
631c54b711 Fixed PyPy cookie updating issue 2013-03-20 11:45:56 +02:00
99f82bbd32 Handle downloads with no Content-Length. 2013-03-07 13:32:48 -03:00
6f64b437b7 Fixed streaming (closes #133) 2013-03-07 12:42:29 -03:00
7774eac3df Fixed unique suffix placement for URLs with a file extension. 2013-03-03 22:35:01 -03:00
8e6c765be2 Initial --download implementation (#104).
Closes #127
2013-03-03 22:17:09 -03:00
f0c42cd089 v0.4.1 2013-02-26 14:37:09 +01:00
5c6cea79a1 Removed a reference to the removed httpie command
Closes #131
2013-02-26 14:31:52 +01:00
2bed81059a Updated README. 2013-02-22 14:04:27 +01:00
be0b2f21d2 v0.4.0 2013-02-22 13:52:50 +01:00
d97a610f7c Added new logo by @claudiatd 2013-02-22 13:51:37 +01:00
5cc5b13555 Removed the management command.
It means that:

    httpie session list
    httpie session edit
    ...

are gone.

It has never been part of a stable release, and since it wasn't
a very useful feature, it's being removed now to avoid feature creep.
2013-02-22 13:27:26 +01:00
3043f24733 .gitignore 2013-02-22 13:19:18 +01:00
093dab5896 Multiple headers TODO. 2013-02-22 13:18:18 +01:00
5f42a21cfb Simplified stored session cookie data. 2013-01-22 20:03:28 +01:00
4c45f0d91f Session name escaping. 2013-01-22 20:02:39 +01:00
d7ec7b2217 Fixing tests for Travis. 2013-01-04 03:19:38 +01:00
7817dfbbcc Fixing tests for Travis. 2013-01-04 03:09:21 +01:00
238b2e0441 Fixing tests for Travis. 2013-01-04 03:05:36 +01:00
a93d57b58b Fixed request/response session cookies.
Closes #113.
2013-01-04 02:59:05 +01:00
79c412064a Python 3.3 fixes. 2013-01-03 15:19:21 +01:00
0ae9d7af58 Compatibility with requests v1.0.4 (requests URL params). 2013-01-03 14:42:17 +01:00
80e317fe24 Added Python 3.3 to tox and travis conf. 2013-01-03 14:14:22 +01:00
1481749c22 Use urlsplit instead of urlparse.
Closes #118.
2013-01-03 14:12:27 +01:00
d84d94dd55 Clean up 2013-01-03 13:49:41 +01:00
1913b0d438 Merge branch 'master' of github.com:jkbr/httpie 2012-12-19 12:31:34 +01:00
fe16f425a9 Require Requests v1.0.3. 2012-12-19 12:31:01 +01:00
7ff71a7f10 Revert: Test Python 3.3 on Travis.
3.3 still not supported
2012-12-19 11:56:02 +01:00
4a37d10245 Test Python 3.3 on Travis. 2012-12-19 11:53:26 +01:00
e5edb66ae8 Requests v1.0: Fixed request body access. 2012-12-19 11:37:52 +01:00
2e57c080fd Pretty print XML 2012-12-17 13:21:38 -08:00
1766dd8291 Requests 1.0: session cookies. 2012-12-17 17:18:18 +01:00
675a8b17ad Merge branch 'master' of github.com:jkbr/httpie 2012-12-17 17:14:24 +01:00
69e26b8bc8 Requests 1.0: prefetch; default_headers. 2012-12-17 17:02:27 +01:00
291f520e0c Update README.rst 2012-12-17 12:26:57 +01:00
9ec328ff6f Session commands. 2012-12-11 12:54:34 +01:00
f2d59ba6bd Improved --check-status + HTTP error + stdout redirect warning. 2012-12-05 05:27:11 +01:00
53caf6ae72 Cleanup 2012-12-05 05:06:06 +01:00
8175366f27 PEP8 2012-12-05 04:39:56 +01:00
8190a7c0c6 Fixed httpie session list 2012-12-05 04:36:42 +01:00
4a615e762f Updated session docs. 2012-12-01 18:43:33 +01:00
7426b4b493 RST formatting. 2012-12-01 18:26:15 +01:00
2cdcadd9d5 Added docs for httpie. 2012-12-01 18:25:34 +01:00
18510a9396 Progress on httpie session *. 2012-12-01 18:16:00 +01:00
acf5f063c7 Typo 2012-12-01 16:52:23 +01:00
2cf379df78 Fixed README typo. 2012-12-01 16:20:16 +01:00
dd100c2cc4 Fixed -j & -v & redirected stdout. Closes #109. 2012-12-01 15:55:58 +01:00
444a9fa929 Added httpless to README. 2012-12-01 15:54:36 +01:00
4a24cd25b9 Clean up. 2012-12-01 15:20:14 +01:00
1c5fb89001 Output stream refactoring. 2012-11-09 15:49:23 +01:00
466e1dbedf Updated CHANGELOG (#100). 2012-11-08 22:39:28 +01:00
d87b2aa0e5 Added support for credentials in URL.
Closes #100 🍰
2012-11-08 22:29:54 +01:00
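Pulling credentials out of the URL is standard-library territory; roughly:

    from urllib.parse import urlsplit  # urlparse.urlsplit on Python 2

    parts = urlsplit('https://user:secret@example.org/resource')
    print(parts.username, parts.password, parts.hostname)
    # -> user secret example.org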
5d969852c7 Added --no-option's and made args more config-friendly. 2012-09-24 06:49:12 +02:00
bbc702fa11 Improved README. 2012-09-24 05:59:52 +02:00
e25d64a610 0.3.0 2012-09-21 05:50:01 +02:00
a41dd7ac6d Allow output redirection on Windows.
Closes #88.
2012-09-21 04:30:59 +02:00
4a6f32a0f4 Documented config.
Also renamed `default_content_type` to `implicit_content_type`.
2012-09-17 03:08:45 +02:00
548bef7dff Added tests for sessions. 2012-09-17 02:15:00 +02:00
6c2001d1f5 Use the HTTPIE_CONFIG_DIR environment variable. 2012-09-17 02:12:16 +02:00
4029dbf309 Added configuration file.
The "default_content_type" option can be set to "form".

Closes #91.
2012-09-17 00:37:36 +02:00
478d654945 Renamed --session-read to --session-read-only. 2012-09-17 00:01:49 +02:00
66bdbc3745 Cleanup. 2012-09-07 12:48:59 +02:00
316e3f45a9 Added --session-read for read-only sessions. 2012-09-07 12:38:52 +02:00
da0eb7db79 Renamed --allow-redirects to --follow. 2012-09-07 11:58:39 +02:00
9338aadd75 Cleanup 2012-09-05 20:22:08 +02:00
dc7d03e6b8 Merge pull request #90 from simonbuchan/898408c20cfab130699cee3bedbae1ad4a1c69b1
Fix --session for Windows (with a Requests patch)
2012-09-04 02:38:34 -07:00
898408c20c Fix sessions for Windows
':' is invalid in a Windows path, and json needs output to support
write(str).
2012-09-04 15:53:27 +12:00
47de4e2c9c Sessions are now host-bound. 2012-08-19 04:58:14 +02:00
f74424ef03 README 2012-08-18 23:11:56 +02:00
8a9cedb16e JSON session data, `httpie' management command. 2012-08-18 23:07:36 +02:00
ff9f23da5b Grouped arguments for a more user-friendly --help. 2012-08-18 06:12:44 +02:00
50810e5bd9 Include data directory location with --debug. 2012-08-18 04:45:29 +02:00
9b586b953b Use %APPDATA% for data on Windows. 2012-08-18 04:36:58 +02:00
149cbc1604 Fixed Solarized style unavailable on Windows.
#87.
2012-08-18 03:54:38 +02:00
d3df59c8af Updated README. 2012-08-17 23:35:36 +02:00
2057e13a1d Updated README. 2012-08-17 23:35:06 +02:00
4957686bcd Updated README. 2012-08-17 23:34:42 +02:00
4c0d7d526f Added initial support for persistent sessions. 2012-08-17 23:30:47 +02:00
0b3bad9c81 Added initial support for persistent sessions. 2012-08-17 23:23:02 +02:00
1ed43c1a1e Semver-compatible versioning. 2012-08-17 21:24:34 +02:00
bf03937f06 Unified output processing options under --pretty.
* --pretty=none instead of --ugly
* --pretty=all instead of --pretty
* --pretty=colors instead of --colors
* --pretty=format instead of --format
2012-08-17 21:15:37 +02:00
4660da949f Fixed colorized output on Windows with Python 3.
Closes #87.
2012-08-17 06:35:18 +02:00
86256af1df Removed non-ASCII characters from README (closes #85). 2012-08-16 18:47:30 +02:00
8bf7f8219c Fixed readme decoding.
Closes #85.
2012-08-16 03:11:15 +02:00
a5522b8233 Revert "Iter body lines to avoid binary false positives."
This reverts commit b92a3a6d95.
2012-08-16 03:06:48 +02:00
b92a3a6d95 Iter body lines to avoid binary false positives.
#84
2012-08-13 23:33:25 +02:00
9098e5b6e8 Updated changelog. 2012-08-12 06:02:13 +02:00
68640a81b3 Use CRLF for headers in the output. 2012-08-10 01:45:07 +02:00
27f08920c4 Improved examples. 2012-08-09 23:36:29 +02:00
c01dd8d64a Added exit status for timed-out requests. 2012-08-09 05:24:58 +02:00
76feea2f68 Added README reStructuredText validation. 2012-08-07 17:20:50 +02:00
22a10aec4a Added --colors and --format.
Closes #59 and #82.
2012-08-07 16:59:49 +02:00
fa334bdf4d Documented --verify. 2012-08-07 15:25:24 +02:00
f6724452cf Skip tests with redirects on Requests 0.13.6. 2012-08-07 15:08:28 +02:00
07de32c406 Version fix. 2012-08-07 15:01:04 +02:00
1fbe7a6121 Improved --debug. 2012-08-07 14:50:51 +02:00
49e44d9b7e Pre-process README.rst so that PyPi renders it. 2012-08-07 14:50:17 +02:00
193683afbb Added proxy docs. 2012-08-07 14:49:43 +02:00
126b1da515 v0.2.8dev 2012-08-07 00:13:27 +02:00
969b310ea9 v0.2.7 2012-08-07 00:12:47 +02:00
dd2c89412c Compatibility with Requests 0.13.6. 2012-08-07 00:07:04 +02:00
381e60f9d8 Extended README. 2012-08-06 23:27:49 +02:00
44e409693b Set JSON Content-Type only with data even with -j. 2012-08-06 22:14:52 +02:00
4e58a3849a Added exit status constants, cleaned up main(). 2012-08-04 19:22:50 +02:00
94c77c9bfc Improved password prompt. 2012-08-04 17:04:36 +02:00
747b87c4e6 Changelog, typos 2012-08-04 16:46:39 +02:00
c7657e3c4b Streamed terminal output
`--stream` can be used to enable streaming also with `--pretty` and to ensure
a more frequent output flushing.
2012-08-04 16:35:31 +02:00
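Conceptually, line-by-line streaming with frequent flushing looks like this sketch (URL made up):

    import sys
    import requests

    response = requests.get('http://example.org/stream', stream=True)
    for line in response.iter_lines(decode_unicode=True):
        sys.stdout.write((line or '') + '\n')
        sys.stdout.flush()  # flush after every line, like `tail -f'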
4615011f2e Sort headers by name when prettifying. 2012-08-03 00:58:01 +02:00
4b1a04e5ed Fixed error handling. 2012-08-02 04:33:43 +02:00
e045ca6bd8 Cleanup, CHANGELOG 2012-08-01 23:51:30 +02:00
52e46bedda Take advantage of streaming.
It's now possible to download huge files with HTTPie, and it's often faster than curl and wget!
2012-08-01 23:21:52 +02:00
67ad5980b2 Don't fetch the response body unless needed.
E.g., this will only read the response headers but won't download the
whole file:

    http GET --headers example.org/big-file.avi

The request method is respected (i.e., it doesn't switch to HEAD like
cURL does).
2012-08-01 21:31:06 +02:00
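With the requests library this amounts to streaming the response and never touching the body; roughly:

    import requests

    # stream=True defers the body: only the status line and headers are read.
    response = requests.get('http://example.org/big-file.avi', stream=True)
    print(response.status_code, response.headers.get('Content-Length'))
    response.close()  # the body was never accessed, so it was never downloaded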
00d85a4b97 Fall back to media subtype if the type is unknown.
Closes #81.
2012-08-01 17:37:23 +02:00
90d34ffd0d Added tests for binary request data. 2012-08-01 00:52:30 +02:00
8905b4fc72 cleanup 2012-07-30 14:23:22 +02:00
a5b98818c8 Syntax-highlighting for examples in the README. 2012-07-30 13:58:13 +02:00
5e7bb1f6dc Syntax-highlighting for examples in the README. 2012-07-30 13:51:28 +02:00
4117d99dd0 Updated screenshot. 2012-07-30 12:37:59 +02:00
49604e7c29 Updated screenshot. 2012-07-30 12:29:56 +02:00
72d371c467 Updated screenshot. 2012-07-30 12:24:11 +02:00
a8c9441f71 Updated screenshot. 2012-07-30 12:11:28 +02:00
e13f65ace1 Updated solarized and switched to Solarized256Style. 2012-07-30 12:11:16 +02:00
a1682d0d2e Added AUTHORS 2012-07-30 12:10:19 +02:00
923a8b71bd Reworked output
Binary now works everywhere. Also added `--output FILE` for Windows.
2012-07-30 10:58:16 +02:00
6eed0d92eb Better error messages. 2012-07-29 07:14:54 +02:00
edf87c3392 Consistent request-response separators. 2012-07-29 06:59:51 +02:00
f73bfea6b8 Validate "file fields (name@/path) require --form / -f". 2012-07-29 06:58:50 +02:00
16635870e3 Removed redundant decode/encode. 2012-07-29 03:52:24 +02:00
f5bc081fda Send filenames with multipart/form-data file uploads. 2012-07-28 13:24:44 +02:00
1efea59a8d Fixed typos. 2012-07-28 06:09:25 +02:00
098e1d3100 Fixed multipart requests output; binary support.
* Bodies of multipart requests are correctly printed (closes #30).
* Binary requests and responses should always work (they are also suppressed
  for terminal output). So things like this work::

     http www.google.com/favicon.ico > favicon.ico
2012-07-28 05:50:12 +02:00
a8ddb8301d Default to https:// if invoked as `https'. 2012-07-27 18:08:33 +02:00
a770d79aef v0.2.7dev 2012-07-26 10:03:34 +02:00
b53d483163 v0.2.6 2012-07-26 09:58:31 +02:00
f45cc0eec0 Added docstrings, refactored input. 2012-07-26 07:23:00 +02:00
f26f2f1438 Mention necessary quoting with :=. #77 2012-07-26 03:24:58 +02:00
851412c698 Improved error messages. 2012-07-26 03:16:42 +02:00
26a76e8243 Clean-up 2012-07-26 00:50:39 +02:00
f5cfd0143b Ensure that the full querystring is printed with -v.
The `key==value` parameters weren't included in the Request-Line URL.

Also added tests.
2012-07-25 14:32:57 +02:00
9391c89205 Fixed RST formatting. 2012-07-24 17:22:04 +02:00
76ebe7c6db Short option for --headers is now -h.
-t has been removed, for usage use --help
2012-07-24 17:17:26 +02:00
7af08b6faa Allow multiple fields with the same name.
Applies to form data and URL params:

    http -f url a=1 a=2
    http url a==1 a==2
2012-07-24 17:00:02 +02:00
9944def703 Switched to "==" as the separator for URL params.
Also refactored item escaping.
2012-07-24 14:56:53 +02:00
728a1a195b Updated changelog. 2012-07-24 01:17:07 +02:00
2646ebaaed Replaced --ignore-http-status with --check-status.
The default behaviour now is to exit with 0 on HTTP errors
unless --check-status is set.
2012-07-24 01:09:14 +02:00
fba3912f2e Fixed tests. 2012-07-23 19:49:38 +02:00
0572158ba1 Added exit codes for HTTP 3xx, 4xx, 5xx (3, 4, 5).
Also added `--ignore-http-status` to ensure 0 exit status.

HTTP 3xx result in 0 exit status when `--allow-redirects` is set.
2012-07-23 19:40:50 +02:00
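The mapping described here is essentially the following (it later appears as get_exit_status() in httpie/core.py, shown further down in this diff):

    def get_exit_status(http_status, follow=False):
        # 3xx -> 3 (unless redirects are followed), 4xx -> 4, 5xx -> 5, else 0.
        if 300 <= http_status <= 399 and not follow:
            return 3
        elif 400 <= http_status <= 499:
            return 4
        elif 500 <= http_status <= 599:
            return 5
        return 0

    assert get_exit_status(404) == 4
    assert get_exit_status(302, follow=True) == 0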
0a673613ef Fixed colorama initialization (#36). 2012-07-21 15:08:28 +02:00
19f760450f Use local httpbin for all tests if available. 2012-07-21 14:37:05 +02:00
35da44309f Undebug 2012-07-21 03:26:48 +02:00
ced6e33230 Fixed tests. 2012-07-21 03:22:47 +02:00
87042f65c9 Added models.Environment().
Refactoring and general cleanup.
2012-07-21 03:14:01 +02:00
c271715a98 Updated flags in README. 2012-07-20 23:51:05 +02:00
57fc606f6b Changed default --print to "b" if stdout piped.
If the output is piped to another program or redirected to a file,
the new default behaviour is to only print the response body.
(It can still be overridden via the ``--print`` flag.)
2012-07-20 23:43:04 +02:00
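In effect, the default --print value is now chosen from whether stdout is a terminal; roughly:

    import sys

    # 'h' = response headers, 'b' = response body (see the --print help further down).
    default_output_options = 'hb' if sys.stdout.isatty() else 'b'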
7d82b853ae Updated installation instructions. 2012-07-20 22:09:53 +02:00
16f23d8147 Improved highlighting of HTTP headers.
Closes #60.
2012-07-20 21:58:41 +02:00
ab7915d9e0 Updated changelog; added stable version README link. 2012-07-19 13:31:02 +03:00
1d6fcfff73 Merge pull request #72 from jakebasile/issue-61
Added Query String Parameters (param=:value).
2012-07-19 03:12:32 -07:00
76a3125153 Updated documentation for query string params. 2012-07-18 21:16:33 -05:00
24d6331d15 Added a bit of testing for the new query string parameters. 2012-07-18 21:16:08 -05:00
06ea36aaa4 Added the ability to pass query string parameters. 2012-07-18 20:44:09 -05:00
c2d70e2bb1 Clean up. 2012-07-17 07:01:30 +02:00
40948dbd2e Updated changelog. 2012-07-17 04:20:37 +02:00
2dba176aa8 Added support for terminal colors under Windows.
Tested on Python 2.7 under Windows 7 with PowerShell and cmd.exe.

Closes #36
2012-07-17 04:06:13 +02:00
54e3e5bca4 README fixes. 2012-07-17 01:55:12 +02:00
533a662651 0.2.6dev 2012-07-17 01:39:30 +02:00
1ce02ebbd5 0.2.5 (bugfixes) 2012-07-17 01:39:02 +02:00
8a7f4c0d6e Fixed tests exit status. 2012-07-17 01:33:18 +02:00
f29c458611 Python 3 fixes. 2012-07-17 01:26:21 +02:00
2d7df0afb4 Fixed AttributeError in Content-Type vendor removal. 2012-07-17 01:11:43 +02:00
16a7d0a719 Fixed accidentally removed __licence__. 2012-07-17 01:11:01 +02:00
0cffda86f6 0.2.5dev 2012-07-17 00:47:42 +02:00
f42ee6da85 0.2.5dev 2012-07-17 00:45:20 +02:00
deeb7cbbac 0.2.4 (bad upload of 0.2.3 to pypi). 2012-07-17 00:44:25 +02:00
12f2fb4a92 Merge branch 'master' of github.com:jkbr/httpie 2012-07-17 00:38:41 +02:00
489bd64295 0.2.4dev 2012-07-17 00:37:53 +02:00
9b8cb42efd 0.2.3 2012-07-17 00:37:13 +02:00
2036337a53 Merge pull request #69 from jokull/master
Prettify vendor+json and vendor+xml Content-Type responses
2012-07-16 15:27:50 -07:00
5ca8bec9ff Add a note on pretty JSON and unicode to changelog
Closes #52
Closes #67
2012-07-17 00:22:39 +02:00
df79792fd9 Added test case to verify unicode output 2012-07-17 00:09:01 +02:00
5a82c79fdf Non-ASCII symbols are now displayed correctly in the output (not as escape sequences). 2012-07-17 00:08:52 +02:00
05b321d38f Better wording. 2012-07-17 00:06:13 +02:00
681b652bf9 Allow stdin data with password prompt; added tests
Closes #70
2012-07-16 23:41:27 +02:00
85b3a016eb Update README with new --auth behavior. 2012-07-16 04:50:25 -04:00
929ead437a Have --auth prompt for password if omitted. 2012-07-16 04:40:36 -04:00
36de166b28 Simplify vendor extension content-types since they are most likely lexable 2012-07-14 14:27:11 +00:00
7bc2de2f9d Merge pull request #68 from cemaleker/master
Added omitted query string data to request headers.
2012-07-13 17:53:11 -07:00
cb7ead04e2 Added omitted query string data to request headers. 2012-07-14 03:37:24 +03:00
cd2ca41f48 Merge pull request #65 from simono/patch-1
Update README.rst and add links to Ubuntu and Debian Packages.
2012-07-11 06:35:28 -07:00
c71de95505 Update README.rst and add links to Ubuntu and Debian Packages. 2012-07-11 16:32:00 +03:00
6ab03b21b4 Fixed Content-Type for requests with no data.
Closes #62.
2012-07-04 01:39:21 +02:00
50196be0f2 Added support for request payload from a filepath
Content-Type is detected from the filename.

Closes #57.
2012-06-29 00:45:31 +02:00
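One way to detect the Content-Type from a filename is the standard library's mimetypes module (a sketch, not necessarily the exact mechanism used here):

    import mimetypes

    content_type, _encoding = mimetypes.guess_type('photo.jpg')
    print(content_type)  # -> image/jpeg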
41d640920c Added more examples. 2012-06-25 14:50:49 +02:00
3179631603 0.2.3dev 2012-06-24 16:45:01 +02:00
2f7921091c 0.2.2 2012-06-24 16:43:03 +02:00
180313d80c Improved tests. 2012-06-24 04:20:45 +02:00
926d3f5caf Tests, docs, clean-up.
Closes #54.
2012-06-24 03:45:21 +02:00
4613d947a8 Default to POST also when stdin redirected.
+clean up
2012-06-24 01:25:30 +02:00
5a47f00bac Replaced mock.Mock with argparse.Namespace to reduce deps. 2012-06-23 23:54:59 +02:00
0e1affbbc4 Issue #54 Method suggestion proposal 2012-06-17 22:15:07 +04:00
d920f20847 Issue #54 Method suggestion proposal 2012-06-17 22:11:26 +04:00
bca36f0464 Issue #54 Method suggestion proposal 2012-06-17 21:46:56 +04:00
78fff98712 Issue #54 Method suggestion proposal 2012-06-16 20:08:31 +04:00
e06c448a75 README improvements. 2012-06-15 17:32:45 +02:00
9cdbd6b0ec Added a Contribute section to README. 2012-06-15 17:13:40 +02:00
cbc6d02127 Fixed --verbose --form.
Closes #53
2012-06-15 16:47:55 +02:00
284a75fa2f Merge pull request #51 from msabramo/testing
Added support for tox (http://tox.testrun.org/)
2012-06-13 07:48:09 -07:00
b3ea273a21 Add "pypy" to .travis.yml 2012-06-13 07:36:51 -07:00
0d129d5f69 Add tox.ini for tox (http://tox.testrun.org/) 2012-06-13 07:18:12 -07:00
1388206f1a Fix path to tests.py in setup.py to make python setup.py test work 2012-06-13 07:18:11 -07:00
28dbe9f76c Bump version to 0.2.2dev. 2012-06-13 16:02:30 +02:00
a0700c41ad 0.2.1 2012-06-13 16:01:23 +02:00
e175fe9d0e Ensured a new line after the request message in the output. 2012-06-13 15:32:02 +02:00
d544ec3823 Made --verbose work also with requests<0.12.1. 2012-06-13 15:25:05 +02:00
6cf2910de0 Version bump to 0.2.1dev. 2012-06-13 15:24:48 +02:00
f64eb09571 Merge pull request #50 from dair-targ/master
Fixed --verbose flag for newer requests.
2012-06-13 06:14:12 -07:00
126130455e Merge pull request #45 from gandaro/pygments-lexers
Use the Pygments HTTP and JSON lexers
2012-06-13 06:09:11 -07:00
70b3658004 --verbose flag was not working. Here is bugfix. 2012-06-02 23:14:21 +04:00
d89eeb0796 PEP-8 2012-04-28 14:18:59 +02:00
bced559496 use PrettyHttp class; working --headers and --body 2012-04-28 14:16:47 +02:00
4aa86cb438 Use the full capability of HttpLexer 2012-04-26 14:48:38 +02:00
2d7f2c65a2 Use the Pygments HTTP and JSON lexers 2012-04-26 13:05:59 +02:00
34 changed files with 5895 additions and 958 deletions

6
.gitignore vendored

@@ -2,3 +2,9 @@ dist
 httpie.egg-info
 build
 *.pyc
+.tox
+README.html
+.coverage
+htmlcov
+.idea
+.DS_Store

.travis.yml

@@ -2,10 +2,8 @@ language: python
 python:
 - 2.6
 - 2.7
-- 3.1
-- 3.2
-script: python tests/tests.py
+- pypy
+- 3.3
+script: python setup.py test
 install:
-- pip install requests pygments
-- "if [[ $TRAVIS_PYTHON_VERSION == '2.6' ]] || [[ $TRAVIS_PYTHON_VERSION == '3.1' ]]; then pip install argparse; fi"
+- pip install . --use-mirrors

31
AUTHORS.rst Normal file

@@ -0,0 +1,31 @@
==============
HTTPie authors
==============
* `Jakub Roztocil <https://github.com/jkbr>`_
Patches and ideas
-----------------
* `Cláudia T. Delgado <https://github.com/claudiatd>`_ (logo)
* `Hank Gay <https://github.com/gthank>`_
* `Jake Basile <https://github.com/jakebasile>`_
* `Vladimir Berkutov <https://github.com/dair-targ>`_
* `Jakob Kramer <https://github.com/gandaro>`_
* `Chris Faulkner <https://github.com/faulkner>`_
* `Alen Mujezinovic <https://github.com/flashingpumpkin>`_
* `Praful Mathur <https://github.com/tictactix>`_
* `Marc Abramowitz <https://github.com/msabramo>`_
* `Ismail Badawi <https://github.com/isbadawi>`_
* `Laurent Bachelier <https://github.com/laurentb>`_
* `Isman Firmansyah <https://github.com/iromli>`_
* `Simon Olofsson <https://github.com/simono>`_
* `Churkin Oleg <https://github.com/Bahus>`_
* `Jökull Sólberg Auðunsson <https://github.com/jokull>`_
* `Matthew M. Boedicker <https://github.com/mmb>`_
* `marblar <https://github.com/marblar>`_
* `Tomek Wójcik <https://github.com/tomekwojcik>`_
* `Davey Shafik <https://github.com/dshafik>`_
* `cido <https://github.com/cido>`_
* `Justin Bonnar <https://github.com/jargonjustin>`_

1422
README.rst

File diff suppressed because it is too large

Binary file not shown (size changed from 135 KiB to 446 KiB).

httpie/__init__.py

@@ -1,7 +1,19 @@
 """
-HTTPie - cURL for humans.
+HTTPie - a CLI, cURL-like tool for humans.
 
 """
 __author__ = 'Jakub Roztocil'
-__version__ = '0.2.0'
+__version__ = '0.7.1'
 __licence__ = 'BSD'
+
+
+class ExitStatus:
+    """Exit status code constants."""
+    OK = 0
+    ERROR = 1
+    ERROR_TIMEOUT = 2
+
+    # Used only when requested with --check-status:
+    ERROR_HTTP_3XX = 3
+    ERROR_HTTP_4XX = 4
+    ERROR_HTTP_5XX = 5

httpie/__main__.py

@@ -1,122 +1,10 @@
#!/usr/bin/env python
"""The main entry point. Invoke as `http' or `python -m httpie'.
"""
import sys
import json
import requests
from requests.compat import str
from . import httpmessage
from . import cliparse
from . import cli
from . import pretty
TYPE_FORM = 'application/x-www-form-urlencoded; charset=utf-8'
TYPE_JSON = 'application/json; charset=utf-8'
def _get_response(parser, args, stdin, stdin_isatty):
if not stdin_isatty:
if args.data:
parser.error('Request body (stdin) and request '
'data (key=value) cannot be mixed.')
args.data = stdin.read()
if args.json or (not args.form and args.data):
# JSON
if not args.files and (
'Content-Type' not in args.headers
and (args.data or args.json)):
args.headers['Content-Type'] = TYPE_JSON
if stdin_isatty:
# Serialize the parsed data.
args.data = json.dumps(args.data)
if 'Accept' not in args.headers:
# Default Accept to JSON as well.
args.headers['Accept'] = 'application/json'
elif not args.files and 'Content-Type' not in args.headers:
# Form
args.headers['Content-Type'] = TYPE_FORM
# Fire the request.
try:
credentials = None
if args.auth:
auth_type = (requests.auth.HTTPDigestAuth
if args.auth_type == 'digest'
else requests.auth.HTTPBasicAuth)
credentials = auth_type(args.auth.key, args.auth.value)
return requests.request(
method=args.method.lower(),
url=args.url if '://' in args.url else 'http://%s' % args.url,
headers=args.headers,
data=args.data,
verify={'yes': True, 'no': False}.get(args.verify, args.verify),
timeout=args.timeout,
auth=credentials,
proxies=dict((p.key, p.value) for p in args.proxy),
files=args.files,
allow_redirects=args.allow_redirects,
)
except (KeyboardInterrupt, SystemExit):
sys.stderr.write('\n')
sys.exit(1)
except Exception as e:
if args.traceback:
raise
sys.stderr.write(str(e.message) + '\n')
sys.exit(1)
def _get_output(args, stdout_isatty, response):
do_prettify = (args.prettify is True or
(args.prettify == cliparse.PRETTIFY_STDOUT_TTY_ONLY
and stdout_isatty))
do_output_request = (cliparse.OUT_REQ_HEADERS in args.output_options
or cliparse.OUT_REQ_BODY in args.output_options)
do_output_response = (cliparse.OUT_RESP_HEADERS in args.output_options
or cliparse.OUT_RESP_BODY in args.output_options)
prettifier = pretty.PrettyHttp(args.style) if do_prettify else None
output = []
if do_output_request:
output.append(httpmessage.format(
message=httpmessage.from_request(response.request),
prettifier=prettifier,
with_headers=cliparse.OUT_REQ_HEADERS in args.output_options,
with_body=cliparse.OUT_REQ_BODY in args.output_options
))
if do_output_response:
output.append('\n')
if do_output_response:
output.append(httpmessage.format(
message=httpmessage.from_response(response),
prettifier=prettifier,
with_headers=cliparse.OUT_RESP_HEADERS in args.output_options,
with_body=cliparse.OUT_RESP_BODY in args.output_options
))
output.append('\n')
return ''.join(output)
def main(args=None,
stdin=sys.stdin, stdin_isatty=sys.stdin.isatty(),
stdout=sys.stdout, stdout_isatty=sys.stdout.isatty()):
parser = cli.parser
args = parser.parse_args(args if args is not None else sys.argv[1:])
response = _get_response(parser, args, stdin, stdin_isatty)
output = _get_output(args, stdout_isatty, response)
output_bytes = output.encode('utf8')
f = (stdout.buffer if hasattr(stdout, 'buffer') else stdout)
f.write(output_bytes)
from .core import main
if __name__ == '__main__':
main()
sys.exit(main())

httpie/cli.py

@@ -1,203 +1,534 @@
"""
CLI definition.
"""CLI arguments definition.
NOTE: the CLI interface may change before reaching v1.0.
"""
from . import pretty
from textwrap import dedent, wrap
#noinspection PyCompatibility
from argparse import (RawDescriptionHelpFormatter, FileType,
OPTIONAL, ZERO_OR_MORE, SUPPRESS)
from . import __doc__
from . import __version__
from . import cliparse
from .plugins.builtin import BuiltinAuthPlugin
from .plugins import plugin_manager
from .sessions import DEFAULT_SESSIONS_DIR
from .output import AVAILABLE_STYLES, DEFAULT_STYLE
from .input import (Parser, AuthCredentialsArgType, KeyValueArgType,
SEP_PROXY, SEP_CREDENTIALS, SEP_GROUP_ITEMS,
OUT_REQ_HEAD, OUT_REQ_BODY, OUT_RESP_HEAD,
OUT_RESP_BODY, OUTPUT_OPTIONS, OUTPUT_OPTIONS_DEFAULT,
PRETTY_MAP, PRETTY_STDOUT_TTY_ONLY, SessionNameValidator)
def _(text):
"""Normalize white space."""
return ' '.join(text.strip().split())
class HTTPieHelpFormatter(RawDescriptionHelpFormatter):
"""A nicer help formatter.
Help for arguments can be indented and contain new lines.
It will be de-dented and arguments in the help
will be separated by a blank line for better readability.
desc = '%s <http://httpie.org>'
parser = cliparse.HTTPieArgumentParser(description=desc % __doc__.strip(),)
parser.add_argument('--version', action='version', version=__version__)
"""
def __init__(self, max_help_position=6, *args, **kwargs):
# A smaller indent for args help.
kwargs['max_help_position'] = max_help_position
super(HTTPieHelpFormatter, self).__init__(*args, **kwargs)
def _split_lines(self, text, width):
text = dedent(text).strip() + '\n\n'
return text.splitlines()
parser = Parser(
formatter_class=HTTPieHelpFormatter,
description='%s <http://httpie.org>' % __doc__.strip(),
epilog=dedent("""
For every --OPTION there is also a --no-OPTION that reverts OPTION
to its default value.
Suggestions and bug reports are greatly appreciated:
https://github.com/jkbr/httpie/issues
""")
)
#######################################################################
# Positional arguments.
#######################################################################
positional = parser.add_argument_group(
title='Positional arguments',
description=dedent("""
These arguments come after any flags and in the order they are listed here.
Only URL is required.
""")
)
positional.add_argument(
'method',
metavar='METHOD',
nargs=OPTIONAL,
default=None,
help="""
The HTTP method to be used for the request (GET, POST, PUT, DELETE, ...).
This argument can be omitted in which case HTTPie will use POST if there
is some data to be sent, otherwise GET:
$ http example.org # => GET
$ http example.org hello=world # => POST
"""
)
positional.add_argument(
'url',
metavar='URL',
help="""
The scheme defaults to 'http://' if the URL does not include one.
"""
)
positional.add_argument(
'items',
metavar='REQUEST ITEM',
nargs=ZERO_OR_MORE,
type=KeyValueArgType(*SEP_GROUP_ITEMS),
help=r"""
Optional key-value pairs to be included in the request. The separator used
determines the type:
':' HTTP headers:
Referer:http://httpie.org Cookie:foo=bar User-Agent:bacon/1.0
'==' URL parameters to be appended to the request URI:
search==httpie
'=' Data fields to be serialized into a JSON object (with --json, -j)
or form data (with --form, -f):
name=HTTPie language=Python description='CLI HTTP client'
'@' Form file fields (only with --form, -f):
cs@~/Documents/CV.pdf
':=' Non-string JSON data fields (only with --json, -j):
awesome:=true amount:=42 colors:='["red", "green", "blue"]'
You can use a backslash to escape a colliding separator in the field name:
field-name-with\:colon=value
"""
)
#######################################################################
# Content type.
#############################################
#######################################################################
group_type = parser.add_mutually_exclusive_group(required=False)
group_type.add_argument(
'--json', '-j', action='store_true',
help=_('''
(default) Data items are serialized as a JSON object.
The Content-Type and Accept headers
are set to application/json (if not set via the command line).
''')
content_type = parser.add_argument_group(
title='Predefined content types',
description=None
)
group_type.add_argument(
'--form', '-f', action='store_true',
help=_('''
Data items are serialized as form fields.
The Content-Type is set to application/x-www-form-urlencoded (if not specified).
The presence of any file fields results into a multipart/form-data request.
''')
content_type.add_argument(
'--json', '-j',
action='store_true',
help="""
(default) Data items from the command line are serialized as a JSON object.
The Content-Type and Accept headers are set to application/json
(if not specified).
"""
)
content_type.add_argument(
'--form', '-f',
action='store_true',
help="""
Data items from the command line are serialized as form fields.
The Content-Type is set to application/x-www-form-urlencoded (if not
specified). The presence of any file fields results in a
multipart/form-data request.
"""
)
# Output options.
#############################################
#######################################################################
# Output processing
#######################################################################
parser.add_argument(
'--traceback', action='store_true', default=False,
help=_('''
Print exception traceback should one occur.
''')
output_processing = parser.add_argument_group(title='Output processing')
output_processing.add_argument(
'--pretty',
dest='prettify',
default=PRETTY_STDOUT_TTY_ONLY,
choices=sorted(PRETTY_MAP.keys()),
help="""
Controls output processing. The value can be "none" to not prettify
the output (default for redirected output), "all" to apply both colors
and formatting (default for terminal output), "colors", or "format".
"""
)
output_processing.add_argument(
'--style', '-s',
dest='style',
metavar='STYLE',
default=DEFAULT_STYLE,
choices=AVAILABLE_STYLES,
help="""
Output coloring style (default is "{default}"). One of:
{available}
For this option to work properly, please make sure that the $TERM
environment variable is set to "xterm-256color" or similar
(e.g., via `export TERM=xterm-256color' in your ~/.bashrc).
"""
.format(
default=DEFAULT_STYLE,
available='\n'.join(
'{0: >20}'.format(line.strip())
for line in
wrap(' '.join(sorted(AVAILABLE_STYLES)), 60)
),
)
)
prettify = parser.add_mutually_exclusive_group(required=False)
prettify.add_argument(
'--pretty', dest='prettify', action='store_true',
default=cliparse.PRETTIFY_STDOUT_TTY_ONLY,
help=_('''
If stdout is a terminal, the response is prettified
by default (colorized and indented if it is JSON).
This flag ensures prettifying even when stdout is redirected.
''')
)
prettify.add_argument(
'--ugly', '-u', dest='prettify', action='store_false',
help=_('''
Do not prettify the response.
''')
)
output_options = parser.add_mutually_exclusive_group(required=False)
output_options.add_argument('--print', '-p', dest='output_options',
default=cliparse.OUT_RESP_HEADERS + cliparse.OUT_RESP_BODY,
help=_('''
String specifying what should the output contain.
"{request_headers}" stands for the request headers and
"{request_body}" for the request body.
"{response_headers}" stands for the response headers and
"{response_body}" for response the body.
Defaults to "hb" which means that the whole response
(headers and body) is printed.
'''.format(
request_headers=cliparse.OUT_REQ_HEADERS,
request_body=cliparse.OUT_REQ_BODY,
response_headers=cliparse.OUT_RESP_HEADERS,
response_body=cliparse.OUT_RESP_BODY,
))
#######################################################################
# Output options
#######################################################################
output_options = parser.add_argument_group(title='Output options')
output_options.add_argument(
'--print', '-p',
dest='output_options',
metavar='WHAT',
help="""
String specifying what the output should contain:
'{req_head}' request headers
'{req_body}' request body
'{res_head}' response headers
'{res_body}' response body
The default behaviour is '{default}' (i.e., the response headers and body
is printed), if standard output is not redirected. If the output is piped
to another program or to a file, then only the response body is printed
by default.
"""
.format(
req_head=OUT_REQ_HEAD,
req_body=OUT_REQ_BODY,
res_head=OUT_RESP_HEAD,
res_body=OUT_RESP_BODY,
default=OUTPUT_OPTIONS_DEFAULT,
)
)
output_options.add_argument(
'--verbose', '-v', dest='output_options',
action='store_const', const=''.join(cliparse.OUTPUT_OPTIONS),
help=_('''
Print the whole request as well as the response.
Shortcut for --print={0}.
'''.format(''.join(cliparse.OUTPUT_OPTIONS)))
'--verbose', '-v',
dest='output_options',
action='store_const',
const=''.join(OUTPUT_OPTIONS),
help="""
Print the whole request as well as the response. Shortcut for --print={0}.
"""
.format(''.join(OUTPUT_OPTIONS))
)
output_options.add_argument(
'--headers', '-t', dest='output_options',
action='store_const', const=cliparse.OUT_RESP_HEADERS,
help=_('''
Print only the response headers.
Shortcut for --print={0}.
'''.format(cliparse.OUT_RESP_HEADERS))
'--headers', '-h',
dest='output_options',
action='store_const',
const=OUT_RESP_HEAD,
help="""
Print only the response headers. Shortcut for --print={0}.
"""
.format(OUT_RESP_HEAD)
)
output_options.add_argument(
'--body', '-b', dest='output_options',
action='store_const', const=cliparse.OUT_RESP_BODY,
help=_('''
Print only the response body.
Shortcut for --print={0}.
'''.format(cliparse.OUT_RESP_BODY))
'--body', '-b',
dest='output_options',
action='store_const',
const=OUT_RESP_BODY,
help="""
Print only the response body. Shortcut for --print={0}.
"""
.format(OUT_RESP_BODY)
)
parser.add_argument(
'--style', '-s', dest='style', default='solarized', metavar='STYLE',
choices=pretty.AVAILABLE_STYLES,
help=_('''
Output coloring style, one of %s. Defaults to solarized.
For this option to work properly, please make sure that the
$TERM environment variable is set to "xterm-256color" or similar
(e.g., via `export TERM=xterm-256color' in your ~/.bashrc).
''') % ', '.join(sorted(pretty.AVAILABLE_STYLES))
output_options.add_argument(
'--stream', '-S',
action='store_true',
default=False,
help="""
Always stream the output by line, i.e., behave like `tail -f'.
Without --stream and with --pretty (either set or implied),
HTTPie fetches the whole response before it outputs the processed data.
Set this option when you want to continuously display a prettified
long-lived response, such as one from the Twitter streaming API.
It is useful also without --pretty: It ensures that the output is flushed
more often and in smaller chunks.
"""
)
output_options.add_argument(
'--output', '-o',
type=FileType('a+b'),
dest='output_file',
metavar='FILE',
help="""
Save output to FILE. If --download is set, then only the response body is
saved to the file. Other parts of the HTTP exchange are printed to stderr.
"""
)
output_options.add_argument(
'--download', '-d',
action='store_true',
default=False,
help="""
Do not print the response body to stdout. Rather, download it and store it
in a file. The filename is guessed unless specified with --output
[filename]. This action is similar to the default behaviour of wget.
"""
)
output_options.add_argument(
'--continue', '-c',
dest='download_resume',
action='store_true',
default=False,
help="""
Resume an interrupted download. Note that the --output option needs to be
specified as well.
"""
)
#######################################################################
# Sessions
#######################################################################
sessions = parser.add_argument_group(title='Sessions')\
.add_mutually_exclusive_group(required=False)
session_name_validator = SessionNameValidator(
'Session name contains invalid characters.'
)
sessions.add_argument(
'--session',
metavar='SESSION_NAME_OR_PATH',
type=session_name_validator,
help="""
Create, or reuse and update a session. Within a session, custom headers,
auth credential, as well as any cookies sent by the server persist between
requests.
Session files are stored in:
{session_dir}/<HOST>/<SESSION_NAME>.json.
"""
.format(session_dir=DEFAULT_SESSIONS_DIR)
)
sessions.add_argument(
'--session-read-only',
metavar='SESSION_NAME_OR_PATH',
type=session_name_validator,
help="""
Create or read a session without updating it from the request/response
exchange.
"""
)
#######################################################################
# Authentication
#######################################################################
# ``requests.request`` keyword arguments.
parser.add_argument(
'--auth', '-a', help='username:password',
type=cliparse.KeyValueType(cliparse.SEP_COMMON)
auth = parser.add_argument_group(title='Authentication')
auth.add_argument(
'--auth', '-a',
metavar='USER[:PASS]',
type=AuthCredentialsArgType(SEP_CREDENTIALS),
help="""
If only the username is provided (-a username), HTTPie will prompt
for the password.
""",
)
parser.add_argument(
'--auth-type', choices=['basic', 'digest'],
help=_('The authentication mechanism to be used. Defaults to "basic".')
)
_auth_plugins = plugin_manager.get_auth_plugins()
auth.add_argument(
'--auth-type',
choices=[plugin.auth_type for plugin in _auth_plugins],
default=_auth_plugins[0].auth_type,
help="""
The authentication mechanism to be used. Defaults to "{default}".
parser.add_argument(
'--verify', default='yes',
help=_('''
Set to "no" to skip checking the host\'s SSL certificate.
You can also pass the path to a CA_BUNDLE
file for private certs. You can also set
the REQUESTS_CA_BUNDLE environment variable.
Defaults to "yes".
''')
)
parser.add_argument(
'--proxy', default=[], action='append',
type=cliparse.KeyValueType(cliparse.SEP_COMMON),
help=_('''
String mapping protocol to the URL of the proxy
(e.g. http:foo.bar:3128).
''')
)
parser.add_argument(
'--allow-redirects', default=False, action='store_true',
help=_('''
Set this flag if full redirects are allowed
(e.g. re-POST-ing of data at new ``Location``)
''')
)
parser.add_argument(
'--timeout', type=float,
help=_('''
Float describes the timeout of the request
(Use socket.setdefaulttimeout() as fallback).
''')
{types}
"""
.format(default=_auth_plugins[0].auth_type, types='\n '.join(
'"{type}": {name}{package}{description}'.format(
type=plugin.auth_type,
name=plugin.name,
package=(
'' if issubclass(plugin, BuiltinAuthPlugin)
else ' (provided by %s)' % plugin.package_name
),
description=(
'' if not plugin.description else
'\n ' + ('\n '.join(wrap(plugin.description)))
)
)
for plugin in _auth_plugins
)),
)
# Positional arguments.
#############################################
#######################################################################
# Network
#######################################################################
parser.add_argument(
'method', metavar='METHOD',
help=_('''
The HTTP method to be used for the request
(GET, POST, PUT, DELETE, PATCH, ...).
''')
network = parser.add_argument_group(title='Network')
network.add_argument(
'--proxy',
default=[],
action='append',
metavar='PROTOCOL:HOST',
type=KeyValueArgType(SEP_PROXY),
help="""
String mapping protocol to the URL of the proxy (e.g. http:foo.bar:3128).
You can specify multiple proxies with different protocols.
"""
)
parser.add_argument(
'url', metavar='URL',
help=_('''
The protocol defaults to http:// if the
URL does not include one.
''')
network.add_argument(
'--follow',
default=False,
action='store_true',
help="""
Set this flag if full redirects are allowed (e.g. re-POST-ing of data at
new Location).
"""
)
parser.add_argument(
'items', nargs='*',
metavar='ITEM',
type=cliparse.KeyValueType(
cliparse.SEP_COMMON,
cliparse.SEP_DATA,
cliparse.SEP_DATA_RAW_JSON,
cliparse.SEP_FILES
),
help=_('''
A key-value pair whose type is defined by the separator used. It can be an
HTTP header (header:value),
a data field to be used in the request body (field_name=value),
a raw JSON data field (field_name:=value)
or a file field (field_name@/path/to/file).
You can use a backslash to escape a colliding separator in the field name.
''')
network.add_argument(
'--verify',
default='yes',
help="""
Set to "no" to skip checking the host's SSL certificate. You can also pass
the path to a CA_BUNDLE file for private certs. You can also set the
REQUESTS_CA_BUNDLE environment variable. Defaults to "yes".
"""
)
network.add_argument(
'--timeout',
type=float,
default=30,
metavar='SECONDS',
help="""
The connection timeout of the request in seconds. The default value is
30 seconds.
"""
)
network.add_argument(
'--check-status',
default=False,
action='store_true',
help="""
By default, HTTPie exits with 0 when no network or other fatal errors
occur. This flag instructs HTTPie to also check the HTTP status code and
exit with an error if the status indicates one.
When the server replies with a 4xx (Client Error) or 5xx (Server Error)
status code, HTTPie exits with 4 or 5 respectively. If the response is a
3xx (Redirect) and --follow hasn't been set, then the exit status is 3.
Also an error message is written to stderr if stdout is redirected.
"""
)
#######################################################################
# Troubleshooting
#######################################################################
troubleshooting = parser.add_argument_group(title='Troubleshooting')
troubleshooting.add_argument(
'--ignore-stdin',
action='store_true',
default=False,
help="""
Do not attempt to read stdin.
"""
)
troubleshooting.add_argument(
'--help',
action='help',
default=SUPPRESS,
help="""
Show this help message and exit.
"""
)
troubleshooting.add_argument(
'--version',
action='version',
version=__version__,
help="""
Show version and exit.
"""
)
troubleshooting.add_argument(
'--traceback',
action='store_true',
default=False,
help="""
Prints exception traceback should one occur.
"""
)
troubleshooting.add_argument(
'--debug',
action='store_true',
default=False,
help="""
Prints exception traceback should one occur, and also other information
that is useful for debugging HTTPie itself and for reporting bugs.
"""
)

94
httpie/client.py Normal file

@@ -0,0 +1,94 @@
import json
import sys
from pprint import pformat
import requests
from . import sessions
from . import __version__
from .plugins import plugin_manager
FORM = 'application/x-www-form-urlencoded; charset=utf-8'
JSON = 'application/json; charset=utf-8'
DEFAULT_UA = 'HTTPie/%s' % __version__
def get_response(args, config_dir):
"""Send the request and return a `request.Response`."""
requests_kwargs = get_requests_kwargs(args)
if args.debug:
sys.stderr.write('\n>>> requests.request(%s)\n\n'
% pformat(requests_kwargs))
if not args.session and not args.session_read_only:
response = requests.request(**requests_kwargs)
else:
response = sessions.get_response(
args=args,
config_dir=config_dir,
session_name=args.session or args.session_read_only,
requests_kwargs=requests_kwargs,
read_only=bool(args.session_read_only),
)
return response
def get_requests_kwargs(args):
"""Translate our `args` into `requests.request` keyword arguments."""
implicit_headers = {
'User-Agent': DEFAULT_UA
}
auto_json = args.data and not args.form
# FIXME: Accept is set to JSON with `http url @./file.txt`.
if args.json or auto_json:
implicit_headers['Accept'] = 'application/json'
if args.json or (auto_json and args.data):
implicit_headers['Content-Type'] = JSON
if isinstance(args.data, dict):
if args.data:
args.data = json.dumps(args.data)
else:
# We need to set data to an empty string to prevent requests
# from assigning an empty list to `response.request.data`.
args.data = ''
elif args.form and not args.files:
# If sending files, `requests` will set
# the `Content-Type` for us.
implicit_headers['Content-Type'] = FORM
for name, value in implicit_headers.items():
if name not in args.headers:
args.headers[name] = value
credentials = None
if args.auth:
auth_plugin = plugin_manager.get_auth_plugin(args.auth_type)()
credentials = auth_plugin.get_auth(args.auth.key, args.auth.value)
kwargs = {
'stream': True,
'method': args.method.lower(),
'url': args.url,
'headers': args.headers,
'data': args.data,
'verify': {
'yes': True,
'no': False
}.get(args.verify, args.verify),
'timeout': args.timeout,
'auth': credentials,
'proxies': dict((p.key, p.value) for p in args.proxy),
'files': args.files,
'allow_redirects': args.follow,
'params': args.params,
}
return kwargs
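
As a rough illustration of get_requests_kwargs() above (attribute names taken from the function, values made up), a parsed `http example.org name=HTTPie` invocation translates along these lines:

    from argparse import Namespace
    from httpie.client import get_requests_kwargs

    args = Namespace(
        method='POST', url='http://example.org/',
        json=False, form=False, data={'name': 'HTTPie'},
        headers={}, files={}, params={}, auth=None, auth_type=None,
        verify='yes', timeout=30, proxy=[], follow=False,
    )
    kwargs = get_requests_kwargs(args)
    # kwargs now carries stream=True, the JSON-encoded body, and the implicit
    # Content-Type, Accept and User-Agent headers, ready for requests.request(**kwargs).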

httpie/cliparse.py

@@ -1,165 +0,0 @@
"""
CLI argument parsing logic.
"""
import os
import json
import re
from collections import namedtuple
try:
from collections import OrderedDict
except ImportError:
OrderedDict = dict
import argparse
from requests.structures import CaseInsensitiveDict
from . import __version__
SEP_COMMON = ':'
SEP_HEADERS = SEP_COMMON
SEP_DATA = '='
SEP_DATA_RAW_JSON = ':='
SEP_FILES = '@'
OUT_REQ_HEADERS = 'H'
OUT_REQ_BODY = 'B'
OUT_RESP_HEADERS = 'h'
OUT_RESP_BODY = 'b'
OUTPUT_OPTIONS = [OUT_REQ_HEADERS,
OUT_REQ_BODY,
OUT_RESP_HEADERS,
OUT_RESP_BODY]
PRETTIFY_STDOUT_TTY_ONLY = object()
DEFAULT_UA = 'HTTPie/%s' % __version__
class HTTPieArgumentParser(argparse.ArgumentParser):
def parse_args(self, args=None, namespace=None):
args = super(HTTPieArgumentParser, self).parse_args(args, namespace)
self._validate_output_options(args)
self._validate_auth_options(args)
self._parse_items(args)
return args
def _parse_items(self, args):
args.headers = CaseInsensitiveDict()
args.headers['User-Agent'] = DEFAULT_UA
args.data = OrderedDict()
args.files = OrderedDict()
try:
parse_items(items=args.items, headers=args.headers,
data=args.data, files=args.files)
except ParseError as e:
if args.traceback:
raise
self.error(e.message)
if args.files and not args.form:
# We could just switch to --form automatically here,
# but I think it's better to make it explicit.
self.error(
' You need to set the --form / -f flag to'
' to issue a multipart request. File fields: %s'
% ','.join(args.files.keys()))
def _validate_output_options(self, args):
unknown_output_options = set(args.output_options) - set(OUTPUT_OPTIONS)
if unknown_output_options:
self.error('Unknown output options: %s' % ','.join(unknown_output_options))
def _validate_auth_options(self, args):
if args.auth_type and not args.auth:
self.error('--auth-type can only be used with --auth')
class ParseError(Exception):
pass
KeyValue = namedtuple('KeyValue', ['key', 'value', 'sep', 'orig'])
class KeyValueType(object):
"""A type used with `argparse`."""
def __init__(self, *separators):
self.separators = separators
self.escapes = ['\\\\' + sep for sep in separators]
def __call__(self, string):
found = {}
found_escapes = []
for esc in self.escapes:
found_escapes += [m.span() for m in re.finditer(esc, string)]
for sep in self.separators:
matches = re.finditer(sep, string)
for match in matches:
start, end = match.span()
inside_escape = False
for estart, eend in found_escapes:
if start >= estart and end <= eend:
inside_escape = True
break
if not inside_escape:
found[start] = sep
if not found:
raise argparse.ArgumentTypeError(
'"%s" is not a valid value' % string)
# split the string at the earliest non-escaped separator.
seploc = min(found.keys())
sep = found[seploc]
key = string[:seploc]
value = string[seploc + len(sep):]
# remove escape chars
for sepstr in self.separators:
key = key.replace('\\' + sepstr, sepstr)
value = value.replace('\\' + sepstr, sepstr)
return KeyValue(key=key, value=value, sep=sep, orig=string)
def parse_items(items, data=None, headers=None, files=None):
"""Parse `KeyValueType` `items` into `data`, `headers` and `files`."""
if headers is None:
headers = {}
if data is None:
data = {}
if files is None:
files = {}
for item in items:
value = item.value
key = item.key
if item.sep == SEP_HEADERS:
target = headers
elif item.sep == SEP_FILES:
try:
value = open(os.path.expanduser(item.value), 'r')
except IOError as e:
raise ParseError(
'Invalid argument %r. %s' % (item.orig, e))
if not key:
key = os.path.basename(value.name)
target = files
elif item.sep in [SEP_DATA, SEP_DATA_RAW_JSON]:
if item.sep == SEP_DATA_RAW_JSON:
try:
value = json.loads(item.value)
except ValueError:
raise ParseError('%s is not valid JSON' % item.orig)
target = data
else:
raise ParseError('%s is not valid item' % item.orig)
if key in target:
ParseError('duplicate item %s (%s)' % (item.key, item.orig))
target[key] = value
return headers, data, files
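
For reference, the removed module above was driven roughly like this (a sketch against the old cliparse API; it does not apply to newer versions):

    from httpie.cliparse import (KeyValueType, parse_items, SEP_COMMON,
                                 SEP_DATA, SEP_DATA_RAW_JSON, SEP_FILES)

    item_type = KeyValueType(SEP_COMMON, SEP_DATA, SEP_DATA_RAW_JSON, SEP_FILES)
    items = [item_type('X-API-Token:123'),   # header
             item_type('name=John'),         # data field
             item_type('age:=29')]           # raw JSON data field
    headers, data, files = parse_items(items)
    # headers == {'X-API-Token': '123'}; data == {'name': 'John', 'age': 29}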

19
httpie/compat.py Normal file

@@ -0,0 +1,19 @@
"""
Python 2/3 compatibility.

"""
#noinspection PyUnresolvedReferences
from requests.compat import (
    is_windows,
    bytes,
    str,
    is_py3,
    is_py26,
)

try:
    #noinspection PyUnresolvedReferences,PyCompatibility
    from urllib.parse import urlsplit
except ImportError:
    #noinspection PyUnresolvedReferences,PyCompatibility
    from urlparse import urlsplit

100
httpie/config.py Normal file

@@ -0,0 +1,100 @@
import os
import json
import errno
from . import __version__
from .compat import is_windows
DEFAULT_CONFIG_DIR = os.environ.get(
'HTTPIE_CONFIG_DIR',
os.path.expanduser('~/.httpie') if not is_windows else
os.path.expandvars(r'%APPDATA%\\httpie')
)
class BaseConfigDict(dict):
name = None
helpurl = None
about = None
directory = DEFAULT_CONFIG_DIR
def __init__(self, directory=None, *args, **kwargs):
super(BaseConfigDict, self).__init__(*args, **kwargs)
if directory:
self.directory = directory
def __getattr__(self, item):
return self[item]
def _get_path(self):
"""Return the config file path without side-effects."""
return os.path.join(self.directory, self.name + '.json')
@property
def path(self):
"""Return the config file path creating basedir, if needed."""
path = self._get_path()
try:
os.makedirs(os.path.dirname(path), mode=0o700)
except OSError as e:
if e.errno != errno.EEXIST:
raise
return path
@property
def is_new(self):
return not os.path.exists(self._get_path())
def load(self):
try:
with open(self.path, 'rt') as f:
try:
data = json.load(f)
except ValueError as e:
raise ValueError(
'Invalid %s JSON: %s [%s]' %
(type(self).__name__, e.message, self.path)
)
self.update(data)
except IOError as e:
if e.errno != errno.ENOENT:
raise
def save(self):
self['__meta__'] = {
'httpie': __version__
}
if self.helpurl:
self['__meta__']['help'] = self.helpurl
if self.about:
self['__meta__']['about'] = self.about
with open(self.path, 'w') as f:
json.dump(self, f, indent=4, sort_keys=True, ensure_ascii=True)
f.write('\n')
def delete(self):
try:
os.unlink(self.path)
except OSError as e:
if e.errno != errno.ENOENT:
raise
class Config(BaseConfigDict):
name = 'config'
helpurl = 'https://github.com/jkbr/httpie#config'
about = 'HTTPie configuration file'
DEFAULTS = {
'implicit_content_type': 'json',
'default_options': []
}
def __init__(self, *args, **kwargs):
super(Config, self).__init__(*args, **kwargs)
self.update(self.DEFAULTS)
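
A minimal sketch of how the Config class above gets used (the file lives under DEFAULT_CONFIG_DIR, or $HTTPIE_CONFIG_DIR when set):

    from httpie.config import Config

    config = Config()  # starts from DEFAULTS: implicit_content_type='json', default_options=[]
    config.load()      # merges in config.json from the config directory, if present
    if config['implicit_content_type'] == 'form':
        print('unmarked request items will be sent form-encoded')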

171
httpie/core.py Normal file

@ -0,0 +1,171 @@
"""This module provides the main functionality of HTTPie.
Invocation flow:
1. Read, validate and process the input (args, `stdin`).
2. Create and send a request.
3. Stream, and possibly process and format, the requested parts
of the request-response exchange.
4. Simultaneously write to `stdout`
5. Exit.
"""
import sys
import errno
import requests
from httpie import __version__ as httpie_version
from requests import __version__ as requests_version
from pygments import __version__ as pygments_version
from .compat import str, is_py3
from .client import get_response
from .downloads import Download
from .models import Environment
from .output import build_output_stream, write, write_with_colors_win_py3
from . import ExitStatus
from .plugins import plugin_manager
def get_exit_status(http_status, follow=False):
"""Translate HTTP status code to exit status code."""
if 300 <= http_status <= 399 and not follow:
# Redirect
return ExitStatus.ERROR_HTTP_3XX
elif 400 <= http_status <= 499:
# Client Error
return ExitStatus.ERROR_HTTP_4XX
elif 500 <= http_status <= 599:
# Server Error
return ExitStatus.ERROR_HTTP_5XX
else:
return ExitStatus.OK
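To make the mapping concrete, a few expected translations (illustration only, using the ExitStatus names imported above):
# get_exit_status(200)               -> ExitStatus.OK
# get_exit_status(301, follow=False) -> ExitStatus.ERROR_HTTP_3XX
# get_exit_status(301, follow=True)  -> ExitStatus.OK
# get_exit_status(404)               -> ExitStatus.ERROR_HTTP_4XX
# get_exit_status(502)               -> ExitStatus.ERROR_HTTP_5XX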
def print_debug_info(env):
sys.stderr.writelines([
'HTTPie %s\n' % httpie_version,
'HTTPie data: %s\n' % env.config.directory,
'Requests %s\n' % requests_version,
'Pygments %s\n' % pygments_version,
'Python %s %s\n' % (sys.version, sys.platform)
])
def main(args=sys.argv[1:], env=Environment()):
"""Run the main program and write the output to ``env.stdout``.
Return exit status code.
"""
plugin_manager.load_installed_plugins()
from .cli import parser
if env.config.default_options:
args = env.config.default_options + args
def error(msg, *args, **kwargs):
msg = msg % args
level = kwargs.get('level', 'error')
env.stderr.write('\nhttp: %s: %s\n' % (level, msg))
debug = '--debug' in args
traceback = debug or '--traceback' in args
exit_status = ExitStatus.OK
if debug:
print_debug_info(env)
if args == ['--debug']:
return exit_status
download = None
try:
args = parser.parse_args(args=args, env=env)
if args.download:
args.follow = True # --download implies --follow.
download = Download(
output_file=args.output_file,
progress_file=env.stderr,
resume=args.download_resume
)
download.pre_request(args.headers)
response = get_response(args, config_dir=env.config.directory)
if args.check_status or download:
exit_status = get_exit_status(
http_status=response.status_code,
follow=args.follow
)
if not env.stdout_isatty and exit_status != ExitStatus.OK:
error('HTTP %s %s',
response.raw.status,
response.raw.reason,
level='warning')
write_kwargs = {
'stream': build_output_stream(
args, env, response.request, response),
# This will in fact be `stderr` with `--download`
'outfile': env.stdout,
'flush': env.stdout_isatty or args.stream
}
try:
if env.is_windows and is_py3 and 'colors' in args.prettify:
write_with_colors_win_py3(**write_kwargs)
else:
write(**write_kwargs)
if download and exit_status == ExitStatus.OK:
# Response body download.
download_stream, download_to = download.start(response)
write(
stream=download_stream,
outfile=download_to,
flush=False,
)
download.finish()
if download.interrupted:
exit_status = ExitStatus.ERROR
error('Incomplete download: size=%d; downloaded=%d' % (
download.status.total_size,
download.status.downloaded
))
except IOError as e:
if not traceback and e.errno == errno.EPIPE:
# Ignore broken pipes unless --traceback.
env.stderr.write('\n')
else:
raise
except (KeyboardInterrupt, SystemExit):
if traceback:
raise
env.stderr.write('\n')
exit_status = ExitStatus.ERROR
except requests.Timeout:
exit_status = ExitStatus.ERROR_TIMEOUT
error('Request timed out (%ss).', args.timeout)
except Exception as e:
# TODO: Better distinction between expected and unexpected errors.
# Network errors vs. bugs, etc.
if traceback:
raise
error('%s: %s', type(e).__name__, str(e))
exit_status = ExitStatus.ERROR
finally:
if download and not download.finished:
download.failed()
return exit_status
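A rough usage sketch (illustration only; it would perform a real request): the entry point can also be called programmatically, and the return value is one of the ExitStatus codes above.
# from httpie.core import main
# status = main(args=['--check-status', 'GET', 'example.org'])
# status == ExitStatus.OK on a 2xx response; ERROR_HTTP_4XX on, e.g., a 404.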

427
httpie/downloads.py Normal file

@ -0,0 +1,427 @@
# coding=utf-8
"""
Download mode implementation.
"""
from __future__ import division
import os
import re
import sys
import mimetypes
import threading
from time import sleep, time
from mailbox import Message
from .output import RawStream
from .models import HTTPResponse
from .utils import humanize_bytes
from .compat import urlsplit
PARTIAL_CONTENT = 206
CLEAR_LINE = '\r\033[K'
PROGRESS = (
'{percentage: 6.2f} %'
' {downloaded: >10}'
' {speed: >10}/s'
' {eta: >8} ETA'
)
PROGRESS_NO_CONTENT_LENGTH = '{downloaded: >10} {speed: >10}/s'
SUMMARY = 'Done. {downloaded} in {time:0.5f}s ({speed}/s)\n'
SPINNER = '|/-\\'
class ContentRangeError(ValueError):
pass
def parse_content_range(content_range, resumed_from):
"""
Parse and validate Content-Range header.
<http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html>
:param content_range: the value of a Content-Range response header
eg. "bytes 21010-47021/47022"
:param resumed_from: first byte pos. from the Range request header
:return: total size of the response body when fully downloaded.
"""
if content_range is None:
raise ContentRangeError('Missing Content-Range')
pattern = (
'^bytes (?P<first_byte_pos>\d+)-(?P<last_byte_pos>\d+)'
'/(\*|(?P<instance_length>\d+))$'
)
match = re.match(pattern, content_range)
if not match:
raise ContentRangeError(
'Invalid Content-Range format %r' % content_range)
content_range_dict = match.groupdict()
first_byte_pos = int(content_range_dict['first_byte_pos'])
last_byte_pos = int(content_range_dict['last_byte_pos'])
instance_length = (
int(content_range_dict['instance_length'])
if content_range_dict['instance_length']
else None
)
# "A byte-content-range-spec with a byte-range-resp-spec whose
# last-byte-pos value is less than its first-byte-pos value,
# or whose instance-length value is less than or equal to its
# last-byte-pos value, is invalid. The recipient of an invalid
# byte-content-range-spec MUST ignore it and any content
# transferred along with it."
if (first_byte_pos >= last_byte_pos
or (instance_length is not None
and instance_length <= last_byte_pos)):
raise ContentRangeError(
'Invalid Content-Range returned: %r' % content_range)
if (first_byte_pos != resumed_from
or (instance_length is not None
and last_byte_pos + 1 != instance_length)):
# Not what we asked for.
raise ContentRangeError(
'Unexpected Content-Range returned (%r)'
' for the requested Range ("bytes=%d-")'
% (content_range, resumed_from)
)
return last_byte_pos + 1
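A worked example of the validation above (illustration only), resuming from byte 21010 against the header value quoted in the docstring:
# parse_content_range('bytes 21010-47021/47022', resumed_from=21010)
#   -> first_byte_pos=21010, last_byte_pos=47021, instance_length=47022
#   -> all checks pass; returns 47021 + 1 == 47022 (the total size)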
def filename_from_content_disposition(content_disposition):
"""
Extract and validate filename from a Content-Disposition header.
:param content_disposition: Content-Disposition value
:return: the filename if present and valid, otherwise `None`
"""
# attachment; filename=jkbr-httpie-0.4.1-20-g40bd8f6.tar.gz
msg = Message('Content-Disposition: %s' % content_disposition)
filename = msg.get_filename()
if filename:
# Basic sanitation.
filename = os.path.basename(filename).lstrip('.').strip()
if filename:
return filename
def filename_from_url(url, content_type):
fn = urlsplit(url).path.rstrip('/')
fn = os.path.basename(fn) if fn else 'index'
if '.' not in fn and content_type:
content_type = content_type.split(';')[0]
if content_type == 'text/plain':
# mimetypes returns '.ksh'
ext = '.txt'
else:
ext = mimetypes.guess_extension(content_type)
if ext == '.htm': # Python 3
ext = '.html'
if ext:
fn += ext
return fn
def get_unique_filename(fn, exists=os.path.exists):
attempt = 0
while True:
suffix = '-' + str(attempt) if attempt > 0 else ''
if not exists(fn + suffix):
return fn + suffix
attempt += 1
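For example (illustration only), if 'index.html' and 'index.html-1' already exist:
# get_unique_filename('index.html')  ->  'index.html-2'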
class Download(object):
def __init__(self, output_file=None,
resume=False, progress_file=sys.stderr):
"""
:param resume: Should the download resume if partial download
already exists.
:type resume: bool
:param output_file: The file to store response body in. If not
provided, it will be guessed from the response.
:type output_file: file
:param progress_file: Where to report download progress.
:type progress_file: file
"""
self._output_file = output_file
self._resume = resume
self._resumed_from = 0
self.finished = False
self.status = Status()
self._progress_reporter = ProgressReporterThread(
status=self.status,
output=progress_file
)
def pre_request(self, request_headers):
"""Called just before the HTTP request is sent.
Might alter `request_headers`.
:type request_headers: dict
"""
# Disable content encoding so that we can resume, etc.
request_headers['Accept-Encoding'] = None
if self._resume:
bytes_have = os.path.getsize(self._output_file.name)
if bytes_have:
# Set ``Range`` header to resume the download
# TODO: Use "If-Range: mtime" to make sure it's fresh?
request_headers['Range'] = 'bytes=%d-' % bytes_have
self._resumed_from = bytes_have
def start(self, response):
"""
Initiate and return a stream for `response` body with progress
callback attached. Can be called only once.
:param response: Initiated response object with headers already fetched
:type response: requests.models.Response
:return: RawStream, output_file
"""
assert not self.status.time_started
try:
total_size = int(response.headers['Content-Length'])
except (KeyError, ValueError, TypeError):
total_size = None
if self._output_file:
if self._resume and response.status_code == PARTIAL_CONTENT:
total_size = parse_content_range(
response.headers.get('Content-Range'),
self._resumed_from
)
else:
self._resumed_from = 0
try:
self._output_file.seek(0)
self._output_file.truncate()
except IOError:
pass # stdout
else:
# TODO: Should the filename be taken from response.history[0].url?
# Output file not specified. Pick a name that doesn't exist yet.
fn = None
if 'Content-Disposition' in response.headers:
fn = filename_from_content_disposition(
response.headers['Content-Disposition'])
if not fn:
fn = filename_from_url(
url=response.url,
content_type=response.headers.get('Content-Type'),
)
self._output_file = open(get_unique_filename(fn), mode='a+b')
self.status.started(
resumed_from=self._resumed_from,
total_size=total_size
)
stream = RawStream(
msg=HTTPResponse(response),
with_headers=False,
with_body=True,
on_body_chunk_downloaded=self.chunk_downloaded,
chunk_size=1024 * 8
)
self._progress_reporter.output.write(
'Downloading %sto "%s"\n' % (
(humanize_bytes(total_size) + ' '
if total_size is not None
else ''),
self._output_file.name
)
)
self._progress_reporter.start()
return stream, self._output_file
def finish(self):
assert not self.finished
self.finished = True
self.status.finished()
def failed(self):
self._progress_reporter.stop()
@property
def interrupted(self):
return (
self.finished
and self.status.total_size
and self.status.total_size != self.status.downloaded
)
def chunk_downloaded(self, chunk):
"""
A download progress callback.
:param chunk: A chunk of response body data that has just
been downloaded and written to the output.
:type chunk: bytes
"""
self.status.chunk_downloaded(len(chunk))
class Status(object):
"""Holds details about the downland status."""
def __init__(self):
self.downloaded = 0
self.total_size = None
self.resumed_from = 0
self.time_started = None
self.time_finished = None
def started(self, resumed_from=0, total_size=None):
assert self.time_started is None
if total_size is not None:
self.total_size = total_size
self.downloaded = self.resumed_from = resumed_from
self.time_started = time()
def chunk_downloaded(self, size):
assert self.time_finished is None
self.downloaded += size
@property
def has_finished(self):
return self.time_finished is not None
def finished(self):
assert self.time_started is not None
assert self.time_finished is None
self.time_finished = time()
class ProgressReporterThread(threading.Thread):
"""
Reports download progress based on its status.
Uses threading to periodically update the status (speed, ETA, etc.).
"""
def __init__(self, status, output, tick=.1, update_interval=1):
"""
:type status: Status
:type output: file
"""
super(ProgressReporterThread, self).__init__()
self.status = status
self.output = output
self._tick = tick
self._update_interval = update_interval
self._spinner_pos = 0
self._status_line = ''
self._prev_bytes = 0
self._prev_time = time()
self._should_stop = threading.Event()
def stop(self):
"""Stop reporting on next tick."""
self._should_stop.set()
def run(self):
while not self._should_stop.is_set():
if self.status.has_finished:
self.sum_up()
break
self.report_speed()
sleep(self._tick)
def report_speed(self):
now = time()
if now - self._prev_time >= self._update_interval:
downloaded = self.status.downloaded
try:
speed = ((downloaded - self._prev_bytes)
/ (now - self._prev_time))
except ZeroDivisionError:
speed = 0
if not self.status.total_size:
self._status_line = PROGRESS_NO_CONTENT_LENGTH.format(
downloaded=humanize_bytes(downloaded),
speed=humanize_bytes(speed),
)
else:
try:
percentage = downloaded / self.status.total_size * 100
except ZeroDivisionError:
percentage = 0
if not speed:
eta = '-:--:--'
else:
s = int((self.status.total_size - downloaded) / speed)
h, s = divmod(s, 60 * 60)
m, s = divmod(s, 60)
eta = '{0}:{1:0>2}:{2:0>2}'.format(h, m, s)
self._status_line = PROGRESS.format(
percentage=percentage,
downloaded=humanize_bytes(downloaded),
speed=humanize_bytes(speed),
eta=eta,
)
self._prev_time = now
self._prev_bytes = downloaded
self.output.write(
CLEAR_LINE
+ ' '
+ SPINNER[self._spinner_pos]
+ ' '
+ self._status_line
)
self.output.flush()
self._spinner_pos = (self._spinner_pos + 1
if self._spinner_pos + 1 != len(SPINNER)
else 0)
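A worked example of the ETA arithmetic above (illustration only, with hypothetical numbers):
# total_size = 10 MiB, downloaded = 2.5 MiB, speed = 1.25 MiB/s
#   -> remaining 7.5 MiB / 1.25 MiB/s = 6 s  ->  eta '0:00:06'
#   -> percentage = 2.5 / 10 * 100 = 25.0, rendered as ' 25.00 %'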
def sum_up(self):
actually_downloaded = (self.status.downloaded
- self.status.resumed_from)
time_taken = self.status.time_finished - self.status.time_started
self.output.write(CLEAR_LINE)
self.output.write(SUMMARY.format(
downloaded=humanize_bytes(actually_downloaded),
total=(self.status.total_size
and humanize_bytes(self.status.total_size)),
speed=humanize_bytes(actually_downloaded / time_taken),
time=time_taken,
))
self.output.flush()


@ -1,66 +0,0 @@
from requests.compat import urlparse
class HTTPMessage(object):
"""Model representing an HTTP message."""
def __init__(self, line, headers, body, content_type=None):
# {Request,Status}-Line
self.line = line
self.headers = headers
self.body = body
self.content_type = content_type
def from_request(request):
"""Make an `HTTPMessage` from `requests.models.Request`."""
url = urlparse(request.url)
request_headers = dict(request.headers)
if 'Host' not in request_headers:
request_headers['Host'] = url.netloc
return HTTPMessage(
line='{method} {path} HTTP/1.1'.format(
method=request.method,
path=url.path or '/'),
headers='\n'.join(str('%s: %s') % (name, value)
for name, value
in request_headers.items()),
body=request._enc_data,
content_type=request_headers.get('Content-Type')
)
def from_response(response):
"""Make an `HTTPMessage` from `requests.models.Response`."""
encoding = response.encoding or 'ISO-8859-1'
original = response.raw._original_response
response_headers = response.headers
return HTTPMessage(
line='HTTP/{version} {status} {reason}'.format(
version='.'.join(str(original.version)),
status=original.status, reason=original.reason,),
headers=str(original.msg),
body=response.content.decode(encoding) if response.content else '',
content_type=response_headers.get('Content-Type'))
def format(message, prettifier=None,
with_headers=True, with_body=True):
"""Return a `unicode` representation of `message`. """
bits = []
if with_headers:
if prettifier:
bits.append(prettifier.headers(message.line))
bits.append(prettifier.headers(message.headers))
else:
bits.append(message.line)
bits.append(message.headers)
if with_body and message.body:
bits.append('\n')
if with_body and message.body:
if prettifier and message.content_type:
bits.append(prettifier.body(message.body, message.content_type))
else:
bits.append(message.body)
bits.append('\n')
return '\n'.join(bit.strip() for bit in bits)

592
httpie/input.py Normal file

@ -0,0 +1,592 @@
"""Parsing and processing of CLI input (args, auth credentials, files, stdin).
"""
import os
import sys
import re
import json
import mimetypes
from getpass import getpass
from io import BytesIO
#noinspection PyCompatibility
from argparse import ArgumentParser, ArgumentTypeError, ArgumentError
try:
from collections import OrderedDict
except ImportError:
OrderedDict = dict
# TODO: Use MultiDict for headers once added to `requests`.
# https://github.com/jkbr/httpie/issues/130
from requests.structures import CaseInsensitiveDict
from .compat import urlsplit, str
from .sessions import VALID_SESSION_NAME_PATTERN
HTTP_POST = 'POST'
HTTP_GET = 'GET'
HTTP = 'http://'
HTTPS = 'https://'
# Various separators used in args
SEP_HEADERS = ':'
SEP_CREDENTIALS = ':'
SEP_PROXY = ':'
SEP_DATA = '='
SEP_DATA_RAW_JSON = ':='
SEP_FILES = '@'
SEP_QUERY = '=='
# Separators that become request data
SEP_GROUP_DATA_ITEMS = frozenset([
SEP_DATA,
SEP_DATA_RAW_JSON,
SEP_FILES
])
# Separators allowed in ITEM arguments
SEP_GROUP_ITEMS = frozenset([
SEP_HEADERS,
SEP_QUERY,
SEP_DATA,
SEP_DATA_RAW_JSON,
SEP_FILES
])
# Output options
OUT_REQ_HEAD = 'H'
OUT_REQ_BODY = 'B'
OUT_RESP_HEAD = 'h'
OUT_RESP_BODY = 'b'
OUTPUT_OPTIONS = frozenset([
OUT_REQ_HEAD,
OUT_REQ_BODY,
OUT_RESP_HEAD,
OUT_RESP_BODY
])
# Pretty
PRETTY_MAP = {
'all': ['format', 'colors'],
'colors': ['colors'],
'format': ['format'],
'none': []
}
PRETTY_STDOUT_TTY_ONLY = object()
# Defaults
OUTPUT_OPTIONS_DEFAULT = OUT_RESP_HEAD + OUT_RESP_BODY
OUTPUT_OPTIONS_DEFAULT_STDOUT_REDIRECTED = OUT_RESP_BODY
class Parser(ArgumentParser):
"""Adds additional logic to `argparse.ArgumentParser`.
Handles all input (CLI args, file args, stdin), applies defaults,
and performs extra validation.
"""
def __init__(self, *args, **kwargs):
kwargs['add_help'] = False
super(Parser, self).__init__(*args, **kwargs)
#noinspection PyMethodOverriding
def parse_args(self, env, args=None, namespace=None):
self.env = env
self.args, no_options = super(Parser, self)\
.parse_known_args(args, namespace)
if self.args.debug:
self.args.traceback = True
# Arguments processing and environment setup.
self._apply_no_options(no_options)
self._apply_config()
self._validate_download_options()
self._setup_standard_streams()
self._process_output_options()
self._process_pretty_options()
self._guess_method()
self._parse_items()
if not self.args.ignore_stdin and not env.stdin_isatty:
self._body_from_file(self.env.stdin)
if not (self.args.url.startswith((HTTP, HTTPS))):
# Default to 'https://' if invoked as `https args`.
scheme = HTTPS if self.env.progname == 'https' else HTTP
self.args.url = scheme + self.args.url
self._process_auth()
return self.args
# noinspection PyShadowingBuiltins
def _print_message(self, message, file=None):
# Sneak in our stderr/stdout.
file = {
sys.stdout: self.env.stdout,
sys.stderr: self.env.stderr,
None: self.env.stderr
}.get(file, file)
super(Parser, self)._print_message(message, file)
def _setup_standard_streams(self):
"""
Modify `env.stdout` and `env.stdout_isatty` based on args, if needed.
"""
if not self.env.stdout_isatty and self.args.output_file:
self.error('Cannot use --output, -o with redirected output.')
# FIXME: Come up with a cleaner solution.
if self.args.download:
if not self.env.stdout_isatty:
# Use stdout as the download output file.
self.args.output_file = self.env.stdout
# With `--download`, we write everything that would normally go to
# `stdout` to `stderr` instead. Let's replace the stream so that
# we don't have to use many `if`s throughout the codebase.
# The response body will be treated separately.
self.env.stdout = self.env.stderr
self.env.stdout_isatty = self.env.stderr_isatty
elif self.args.output_file:
# When not `--download`ing, then `--output` simply replaces
# `stdout`. The file is opened for appending, which isn't what
# we want in this case.
self.args.output_file.seek(0)
self.args.output_file.truncate()
self.env.stdout = self.args.output_file
self.env.stdout_isatty = False
def _apply_config(self):
if (not self.args.json
and self.env.config.implicit_content_type == 'form'):
self.args.form = True
def _process_auth(self):
"""
If only a username provided via --auth, then ask for a password.
Or, take credentials from the URL, if provided.
"""
url = urlsplit(self.args.url)
if self.args.auth:
if not self.args.auth.has_password():
# Stdin has already been read (if not a TTY), so it's safe to prompt.
if self.args.ignore_stdin:
self.error('Unable to prompt for passwords because'
' --ignore-stdin is set.')
self.args.auth.prompt_password(url.netloc)
elif url.username is not None:
# Handle http://username:password@hostname/
username, password = url.username, url.password
self.args.auth = AuthCredentials(
key=username,
value=password,
sep=SEP_CREDENTIALS,
orig=SEP_CREDENTIALS.join([username, password])
)
def _apply_no_options(self, no_options):
"""For every `--no-OPTION` in `no_options`, set `args.OPTION` to
its default value. This allows for un-setting of options, e.g.,
specified in config.
"""
invalid = []
for option in no_options:
if not option.startswith('--no-'):
invalid.append(option)
continue
# --no-option => --option
inverted = '--' + option[5:]
for action in self._actions:
if inverted in action.option_strings:
setattr(self.args, action.dest, action.default)
break
else:
invalid.append(option)
if invalid:
msg = 'unrecognized arguments: %s'
self.error(msg % ' '.join(invalid))
def _body_from_file(self, fd):
"""There can only be one source of request data.
Bytes are always read.
"""
if self.args.data:
self.error('Request body (from stdin or a file) and request '
'data (key=value) cannot be mixed.')
self.args.data = getattr(fd, 'buffer', fd).read()
def _guess_method(self):
"""Set `args.method` if not specified to either POST or GET
based on whether the request has data or not.
"""
if self.args.method is None:
# Invoked as `http URL'.
assert not self.args.items
if not self.args.ignore_stdin and not self.env.stdin_isatty:
self.args.method = HTTP_POST
else:
self.args.method = HTTP_GET
# FIXME: False positive, e.g., "localhost" matches but is a valid URL.
elif not re.match('^[a-zA-Z]+$', self.args.method):
# Invoked as `http URL item+'. The URL is now in `args.method`
# and the first ITEM is now incorrectly in `args.url`.
try:
# Parse the URL as an ITEM and store it as the first ITEM arg.
self.args.items.insert(
0,
KeyValueArgType(*SEP_GROUP_ITEMS).__call__(self.args.url)
)
except ArgumentTypeError as e:
if self.args.traceback:
raise
self.error(e.message)
else:
# Set the URL correctly
self.args.url = self.args.method
# Infer the method
has_data = (
(not self.args.ignore_stdin and
not self.env.stdin_isatty) or any(
item.sep in SEP_GROUP_DATA_ITEMS
for item in self.args.items
)
)
self.args.method = HTTP_POST if has_data else HTTP_GET
def _parse_items(self):
"""Parse `args.items` into `args.headers`, `args.data`, `args.params`,
and `args.files`.
"""
self.args.headers = CaseInsensitiveDict()
self.args.data = ParamDict() if self.args.form else OrderedDict()
self.args.files = OrderedDict()
self.args.params = ParamDict()
try:
parse_items(items=self.args.items,
headers=self.args.headers,
data=self.args.data,
files=self.args.files,
params=self.args.params)
except ParseError as e:
if self.args.traceback:
raise
self.error(e.message)
if self.args.files and not self.args.form:
# `http url @/path/to/file`
file_fields = list(self.args.files.keys())
if file_fields != ['']:
self.error(
'Invalid file fields (perhaps you meant --form?): %s'
% ','.join(file_fields))
fn, fd = self.args.files['']
self.args.files = {}
self._body_from_file(fd)
if 'Content-Type' not in self.args.headers:
mime, encoding = mimetypes.guess_type(fn, strict=False)
if mime:
content_type = mime
if encoding:
content_type = '%s; charset=%s' % (mime, encoding)
self.args.headers['Content-Type'] = content_type
def _process_output_options(self):
"""Apply defaults to output options, or validate the provided ones.
The default output options are stdout-type-sensitive.
"""
if not self.args.output_options:
self.args.output_options = (
OUTPUT_OPTIONS_DEFAULT
if self.env.stdout_isatty
else OUTPUT_OPTIONS_DEFAULT_STDOUT_REDIRECTED
)
unknown_output_options = set(self.args.output_options) - OUTPUT_OPTIONS
if unknown_output_options:
self.error(
'Unknown output options: %s' % ','.join(unknown_output_options)
)
if self.args.download and OUT_RESP_BODY in self.args.output_options:
# Response body is always downloaded with --download and it goes
# through a different routine, so we remove it.
self.args.output_options = str(
set(self.args.output_options) - set(OUT_RESP_BODY))
def _process_pretty_options(self):
if self.args.prettify == PRETTY_STDOUT_TTY_ONLY:
self.args.prettify = PRETTY_MAP[
'all' if self.env.stdout_isatty else 'none']
elif self.args.prettify and self.env.is_windows:
self.error('Only terminal output can be colorized on Windows.')
else:
# noinspection PyTypeChecker
self.args.prettify = PRETTY_MAP[self.args.prettify]
def _validate_download_options(self):
if not self.args.download:
if self.args.download_resume:
self.error('--continue only works with --download')
if self.args.download_resume and not (
self.args.download and self.args.output_file):
self.error('--continue requires --output to be specified')
class ParseError(Exception):
pass
class KeyValue(object):
"""Base key-value pair parsed from CLI."""
def __init__(self, key, value, sep, orig):
self.key = key
self.value = value
self.sep = sep
self.orig = orig
def __eq__(self, other):
return self.__dict__ == other.__dict__
class SessionNameValidator(object):
def __init__(self, error_message):
self.error_message = error_message
def __call__(self, value):
# Session name can be a path or just a name.
if (os.path.sep not in value
and not VALID_SESSION_NAME_PATTERN.search(value)):
raise ArgumentError(None, self.error_message)
return value
class KeyValueArgType(object):
"""A key-value pair argument type used with `argparse`.
Parses a key-value arg and constructs a `KeyValue` instance.
Used for headers, form data, and other key-value pair types.
"""
key_value_class = KeyValue
def __init__(self, *separators):
self.separators = separators
def __call__(self, string):
"""Parse `string` and return `self.key_value_class()` instance.
The best of `self.separators` is determined (first found, longest).
Backslash-escaped characters aren't considered separators
(or parts thereof). Literal backslash characters have to be escaped
as well (r'\\').
"""
class Escaped(str):
"""Represents an escaped character."""
def tokenize(s):
"""Tokenize `s`. There are only two token types - strings
and escaped characters:
tokenize(r'foo\=bar\\baz')
=> ['foo', Escaped('='), 'bar', Escaped('\\'), 'baz']
"""
tokens = ['']
esc = False
for c in s:
if esc:
tokens.extend([Escaped(c), ''])
esc = False
else:
if c == '\\':
esc = True
else:
tokens[-1] += c
return tokens
tokens = tokenize(string)
# Sorting by length ensures that the longest one will be
# chosen as it will overwrite any shorter ones starting
# at the same position in the `found` dictionary.
separators = sorted(self.separators, key=len)
for i, token in enumerate(tokens):
if isinstance(token, Escaped):
continue
found = {}
for sep in separators:
pos = token.find(sep)
if pos != -1:
found[pos] = sep
if found:
# Starting first, longest separator found.
sep = found[min(found.keys())]
key, value = token.split(sep, 1)
# Any preceding tokens are part of the key.
key = ''.join(tokens[:i]) + key
# Any following tokens are part of the value.
value += ''.join(tokens[i + 1:])
break
else:
raise ArgumentTypeError(
'"%s" is not a valid value' % string)
return self.key_value_class(
key=key, value=value, sep=sep, orig=string)
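Two quick examples of the separator and escaping rules (illustration only, using the SEP_* constants defined above):
# KeyValueArgType(SEP_HEADERS)('X-API-Key:foo')
#   -> KeyValue(key='X-API-Key', value='foo', sep=':', orig='X-API-Key:foo')
# KeyValueArgType(SEP_DATA)(r'path\=to=value')   # escaped '=' stays in the key
#   -> KeyValue(key='path=to', value='value', sep='=', orig=r'path\=to=value')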
class AuthCredentials(KeyValue):
"""Represents parsed credentials."""
def _getpass(self, prompt):
# To allow mocking.
return getpass(prompt)
def has_password(self):
return self.value is not None
def prompt_password(self, host):
try:
self.value = self._getpass(
'http: password for %s@%s: ' % (self.key, host))
except (EOFError, KeyboardInterrupt):
sys.stderr.write('\n')
sys.exit(0)
class AuthCredentialsArgType(KeyValueArgType):
"""A key-value arg type that parses credentials."""
key_value_class = AuthCredentials
def __call__(self, string):
"""Parse credentials from `string`.
("username" or "username:password").
"""
try:
return super(AuthCredentialsArgType, self).__call__(string)
except ArgumentTypeError:
# No password provided, will prompt for it later.
return self.key_value_class(
key=string,
value=None,
sep=SEP_CREDENTIALS,
orig=string
)
class ParamDict(OrderedDict):
"""Multi-value dict for URL parameters and form data."""
#noinspection PyMethodOverriding
def __setitem__(self, key, value):
""" If `key` is assigned more than once, `self[key]` holds a
`list` of all the values.
This allows having multiple fields with the same name in form
data and URL params.
"""
if key not in self:
super(ParamDict, self).__setitem__(key, value)
else:
if not isinstance(self[key], list):
super(ParamDict, self).__setitem__(key, [self[key]])
self[key].append(value)
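A doctest-style sketch of the multi-value behaviour (illustration only):
# >>> d = ParamDict()
# >>> d['name'] = 'a'
# >>> d['name'] = 'b'
# >>> d['name']
# ['a', 'b']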
def parse_items(items, data=None, headers=None, files=None, params=None):
"""Parse `KeyValue` `items` into `data`, `headers`, `files`,
and `params`.
"""
if headers is None:
headers = CaseInsensitiveDict()
if data is None:
data = OrderedDict()
if files is None:
files = OrderedDict()
if params is None:
params = ParamDict()
for item in items:
value = item.value
key = item.key
if item.sep == SEP_HEADERS:
target = headers
elif item.sep == SEP_QUERY:
target = params
elif item.sep == SEP_FILES:
try:
with open(os.path.expanduser(value), 'rb') as f:
value = (os.path.basename(value),
BytesIO(f.read()))
except IOError as e:
raise ParseError(
'Invalid argument "%s": %s' % (item.orig, e))
target = files
elif item.sep in [SEP_DATA, SEP_DATA_RAW_JSON]:
if item.sep == SEP_DATA_RAW_JSON:
try:
value = json.loads(item.value)
except ValueError:
raise ParseError('"%s" is not valid JSON' % item.orig)
target = data
else:
raise TypeError(item)
target[key] = value
return headers, data, files, params
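For orientation (illustration only), the separators route ITEM arguments as follows:
# 'X-Foo:bar'      (':'  SEP_HEADERS)        -> headers['X-Foo'] = 'bar'
# 'search==httpie' ('==' SEP_QUERY)          -> params['search'] = 'httpie'
# 'name=John'      ('='  SEP_DATA)           -> data['name'] = 'John'
# 'age:=29'        (':=' SEP_DATA_RAW_JSON)  -> data['age'] = 29
# 'cv@~/doc.txt'   ('@'  SEP_FILES)          -> files['cv'] = ('doc.txt', BytesIO(...))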

177
httpie/models.py Normal file

@ -0,0 +1,177 @@
import os
import sys
from .config import DEFAULT_CONFIG_DIR, Config
from .compat import urlsplit, is_windows, bytes, str
class Environment(object):
"""Holds information about the execution context.
Groups various aspects of the environment in a changeable object
and allows for mocking.
"""
is_windows = is_windows
progname = os.path.basename(sys.argv[0])
if progname not in ['http', 'https']:
progname = 'http'
config_dir = DEFAULT_CONFIG_DIR
# Can be set to 0 to disable colors completely.
colors = 256 if '256color' in os.environ.get('TERM', '') else 88
stdin = sys.stdin
stdin_isatty = sys.stdin.isatty()
stdout_isatty = sys.stdout.isatty()
stderr_isatty = sys.stderr.isatty()
if is_windows:
# noinspection PyUnresolvedReferences
from colorama.initialise import wrap_stream
stdout = wrap_stream(sys.stdout, convert=None,
strip=None, autoreset=True, wrap=True)
stderr = wrap_stream(sys.stderr, convert=None,
strip=None, autoreset=True, wrap=True)
else:
stdout = sys.stdout
stderr = sys.stderr
def __init__(self, **kwargs):
assert all(hasattr(type(self), attr)
for attr in kwargs.keys())
self.__dict__.update(**kwargs)
@property
def config(self):
if not hasattr(self, '_config'):
self._config = Config(directory=self.config_dir)
if self._config.is_new:
self._config.save()
else:
self._config.load()
return self._config
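Since __init__ only accepts existing class attributes, any aspect of the execution context can be overridden per instance, e.g. in tests (illustration only):
# env = Environment(stdin_isatty=True, stdout_isatty=False, colors=0)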
class HTTPMessage(object):
"""Abstract class for HTTP messages."""
def __init__(self, orig):
self._orig = orig
def iter_body(self, chunk_size):
"""Return an iterator over the body."""
raise NotImplementedError()
def iter_lines(self, chunk_size):
"""Return an iterator over the body yielding (`line`, `line_feed`)."""
raise NotImplementedError()
@property
def headers(self):
"""Return a `str` with the message's headers."""
raise NotImplementedError()
@property
def encoding(self):
"""Return a `str` with the message's encoding, if known."""
raise NotImplementedError()
@property
def body(self):
"""Return a `bytes` with the message's body."""
raise NotImplementedError()
@property
def content_type(self):
"""Return the message content type."""
return self._orig.headers.get('Content-Type', '')
class HTTPResponse(HTTPMessage):
"""A :class:`requests.models.Response` wrapper."""
def iter_body(self, chunk_size=1):
return self._orig.iter_content(chunk_size=chunk_size)
def iter_lines(self, chunk_size):
return ((line, b'\n') for line in self._orig.iter_lines(chunk_size))
#noinspection PyProtectedMember
@property
def headers(self):
original = self._orig.raw._original_response
status_line = 'HTTP/{version} {status} {reason}'.format(
version='.'.join(str(original.version)),
status=original.status,
reason=original.reason
)
headers = [status_line]
try:
# `original.msg` is a `http.client.HTTPMessage` on Python 3
# `_headers` is a 2-tuple
headers.extend(
'%s: %s' % header for header in original.msg._headers)
except AttributeError:
# and a `httplib.HTTPMessage` on Python 2.x
# `headers` is a list of `name: val<CRLF>`.
headers.extend(h.strip() for h in original.msg.headers)
return '\r\n'.join(headers)
@property
def encoding(self):
return self._orig.encoding or 'utf8'
@property
def body(self):
# Only now the response body is fetched.
# Shouldn't be touched unless the body is actually needed.
return self._orig.content
class HTTPRequest(HTTPMessage):
"""A :class:`requests.models.Request` wrapper."""
def iter_body(self, chunk_size):
yield self.body
def iter_lines(self, chunk_size):
yield self.body, b''
@property
def headers(self):
url = urlsplit(self._orig.url)
request_line = '{method} {path}{query} HTTP/1.1'.format(
method=self._orig.method,
path=url.path or '/',
query='?' + url.query if url.query else ''
)
headers = dict(self._orig.headers)
if 'Host' not in headers:
headers['Host'] = url.netloc
headers = ['%s: %s' % (name, value)
for name, value in headers.items()]
headers.insert(0, request_line)
return '\r\n'.join(headers).strip()
@property
def encoding(self):
return 'utf8'
@property
def body(self):
body = self._orig.body
if isinstance(body, str):
# Happens with JSON/form request data parsed from the command line.
body = body.encode('utf8')
return body or b''

526
httpie/output.py Normal file

@ -0,0 +1,526 @@
"""Output streaming, processing and formatting.
"""
import json
import xml.dom.minidom
from functools import partial
from itertools import chain
import pygments
from pygments import token, lexer
from pygments.styles import get_style_by_name, STYLE_MAP
from pygments.lexers import get_lexer_for_mimetype, get_lexer_by_name
from pygments.formatters.terminal import TerminalFormatter
from pygments.formatters.terminal256 import Terminal256Formatter
from pygments.util import ClassNotFound
from .compat import is_windows
from .solarized import Solarized256Style
from .models import HTTPRequest, HTTPResponse, Environment
from .input import (OUT_REQ_BODY, OUT_REQ_HEAD,
OUT_RESP_HEAD, OUT_RESP_BODY)
# The default number of spaces to indent when pretty printing
DEFAULT_INDENT = 4
# Colors on Windows via colorama don't look that
# great and fruity seems to give the best result there.
AVAILABLE_STYLES = set(STYLE_MAP.keys())
AVAILABLE_STYLES.add('solarized')
DEFAULT_STYLE = 'solarized' if not is_windows else 'fruity'
BINARY_SUPPRESSED_NOTICE = (
b'\n'
b'+-----------------------------------------+\n'
b'| NOTE: binary data not shown in terminal |\n'
b'+-----------------------------------------+'
)
class BinarySuppressedError(Exception):
"""An error indicating that the body is binary and won't be written,
e.g., for terminal output)."""
message = BINARY_SUPPRESSED_NOTICE
###############################################################################
# Output Streams
###############################################################################
def write(stream, outfile, flush):
"""Write the output stream."""
try:
# Writing bytes so we use the buffer interface (Python 3).
buf = outfile.buffer
except AttributeError:
buf = outfile
for chunk in stream:
buf.write(chunk)
if flush:
outfile.flush()
def write_with_colors_win_py3(stream, outfile, flush):
"""Like `write`, but colorized chunks are written as text
directly to `outfile` to ensure it gets processed by colorama.
Applies only to Windows with Python 3 and colorized terminal output.
"""
color = b'\x1b['
encoding = outfile.encoding
for chunk in stream:
if color in chunk:
outfile.write(chunk.decode(encoding))
else:
outfile.buffer.write(chunk)
if flush:
outfile.flush()
def build_output_stream(args, env, request, response):
"""Build and return a chain of iterators over the `request`-`response`
exchange each of which yields `bytes` chunks.
"""
req_h = OUT_REQ_HEAD in args.output_options
req_b = OUT_REQ_BODY in args.output_options
resp_h = OUT_RESP_HEAD in args.output_options
resp_b = OUT_RESP_BODY in args.output_options
req = req_h or req_b
resp = resp_h or resp_b
output = []
Stream = get_stream_type(env, args)
if req:
output.append(Stream(
msg=HTTPRequest(request),
with_headers=req_h,
with_body=req_b))
if req_b and resp:
# Request/Response separator.
output.append([b'\n\n'])
if resp:
output.append(Stream(
msg=HTTPResponse(response),
with_headers=resp_h,
with_body=resp_b))
if env.stdout_isatty and resp_b:
# Ensure a blank line after the response body.
# For terminal output only.
output.append([b'\n\n'])
return chain(*output)
def get_stream_type(env, args):
"""Pick the right stream type based on `env` and `args`.
Wrap it in a partial with the type-specific args so that
we don't need to think what stream we are dealing with.
"""
if not env.stdout_isatty and not args.prettify:
Stream = partial(
RawStream,
chunk_size=RawStream.CHUNK_SIZE_BY_LINE
if args.stream
else RawStream.CHUNK_SIZE
)
elif args.prettify:
Stream = partial(
PrettyStream if args.stream else BufferedPrettyStream,
env=env,
processor=OutputProcessor(
env=env, groups=args.prettify, pygments_style=args.style),
)
else:
Stream = partial(EncodedStream, env=env)
return Stream
class BaseStream(object):
"""Base HTTP message output stream class."""
def __init__(self, msg, with_headers=True, with_body=True,
on_body_chunk_downloaded=None):
"""
:param msg: a :class:`models.HTTPMessage` subclass
:param with_headers: if `True`, headers will be included
:param with_body: if `True`, body will be included
"""
assert with_headers or with_body
self.msg = msg
self.with_headers = with_headers
self.with_body = with_body
self.on_body_chunk_downloaded = on_body_chunk_downloaded
def _get_headers(self):
"""Return the headers' bytes."""
return self.msg.headers.encode('ascii')
def _iter_body(self):
"""Return an iterator over the message body."""
raise NotImplementedError()
def __iter__(self):
"""Return an iterator over `self.msg`."""
if self.with_headers:
yield self._get_headers()
yield b'\r\n\r\n'
if self.with_body:
try:
for chunk in self._iter_body():
yield chunk
if self.on_body_chunk_downloaded:
self.on_body_chunk_downloaded(chunk)
except BinarySuppressedError as e:
if self.with_headers:
yield b'\n'
yield e.message
class RawStream(BaseStream):
"""The message is streamed in chunks with no processing."""
CHUNK_SIZE = 1024 * 100
CHUNK_SIZE_BY_LINE = 1
def __init__(self, chunk_size=CHUNK_SIZE, **kwargs):
super(RawStream, self).__init__(**kwargs)
self.chunk_size = chunk_size
def _iter_body(self):
return self.msg.iter_body(self.chunk_size)
class EncodedStream(BaseStream):
"""Encoded HTTP message stream.
The message bytes are converted to an encoding suitable for
`self.env.stdout`. Unicode errors are replaced and binary data
is suppressed. The body is always streamed by line.
"""
CHUNK_SIZE = 1
def __init__(self, env=Environment(), **kwargs):
super(EncodedStream, self).__init__(**kwargs)
if env.stdout_isatty:
# Use the encoding supported by the terminal.
output_encoding = getattr(env.stdout, 'encoding', None)
else:
# Preserve the message encoding.
output_encoding = self.msg.encoding
# Default to utf8 when unsure.
self.output_encoding = output_encoding or 'utf8'
def _iter_body(self):
for line, lf in self.msg.iter_lines(self.CHUNK_SIZE):
if b'\0' in line:
raise BinarySuppressedError()
yield line.decode(self.msg.encoding)\
.encode(self.output_encoding, 'replace') + lf
class PrettyStream(EncodedStream):
"""In addition to :class:`EncodedStream` behaviour, this stream applies
content processing.
Useful for long-lived HTTP responses that stream by lines
such as the Twitter streaming API.
"""
CHUNK_SIZE = 1
def __init__(self, processor, **kwargs):
super(PrettyStream, self).__init__(**kwargs)
self.processor = processor
def _get_headers(self):
return self.processor.process_headers(
self.msg.headers).encode(self.output_encoding)
def _iter_body(self):
for line, lf in self.msg.iter_lines(self.CHUNK_SIZE):
if b'\0' in line:
raise BinarySuppressedError()
yield self._process_body(line) + lf
def _process_body(self, chunk):
return (self.processor
.process_body(
content=chunk.decode(self.msg.encoding, 'replace'),
content_type=self.msg.content_type,
encoding=self.msg.encoding)
.encode(self.output_encoding, 'replace'))
class BufferedPrettyStream(PrettyStream):
"""The same as :class:`PrettyStream` except that the body is fully
fetched before it's processed.
Suitable for regular HTTP responses.
"""
CHUNK_SIZE = 1024 * 10
def _iter_body(self):
# Read the whole body before prettifying it,
# but bail out immediately if the body is binary.
body = bytearray()
for chunk in self.msg.iter_body(self.CHUNK_SIZE):
if b'\0' in chunk:
raise BinarySuppressedError()
body.extend(chunk)
yield self._process_body(body)
###############################################################################
# Processing
###############################################################################
class HTTPLexer(lexer.RegexLexer):
"""Simplified HTTP lexer for Pygments.
It only operates on headers and provides a stronger contrast between
their names and values than the original one bundled with Pygments
(:class:`pygments.lexers.text.HttpLexer`), especially when the
Solarized color scheme is used.
"""
name = 'HTTP'
aliases = ['http']
filenames = ['*.http']
tokens = {
'root': [
# Request-Line
(r'([A-Z]+)( +)([^ ]+)( +)(HTTP)(/)(\d+\.\d+)',
lexer.bygroups(
token.Name.Function,
token.Text,
token.Name.Namespace,
token.Text,
token.Keyword.Reserved,
token.Operator,
token.Number
)),
# Response Status-Line
(r'(HTTP)(/)(\d+\.\d+)( +)(\d{3})( +)(.+)',
lexer.bygroups(
token.Keyword.Reserved, # 'HTTP'
token.Operator, # '/'
token.Number, # Version
token.Text,
token.Number, # Status code
token.Text,
token.Name.Exception, # Reason
)),
# Header
(r'(.*?)( *)(:)( *)(.+)', lexer.bygroups(
token.Name.Attribute, # Name
token.Text,
token.Operator, # Colon
token.Text,
token.String # Value
))
]
}
class BaseProcessor(object):
"""Base, noop output processor class."""
enabled = True
def __init__(self, env=Environment(), **kwargs):
"""
:param env: an :class:`Environment` instance
:param kwargs: additional keyword arguments that some
processors might require.
"""
self.env = env
self.kwargs = kwargs
def process_headers(self, headers):
"""Return processed `headers`
:param headers: The headers as text.
"""
return headers
def process_body(self, content, content_type, subtype, encoding):
"""Return processed `content`.
:param content: The body content as text
:param content_type: Full content type, e.g., 'application/atom+xml'.
:param subtype: E.g. 'xml'.
:param encoding: The original content encoding.
"""
return content
class JSONProcessor(BaseProcessor):
"""JSON body processor."""
def process_body(self, content, content_type, subtype, encoding):
if subtype == 'json':
try:
# Indent the JSON data, sort keys by name, and
# avoid unicode escapes to improve readability.
content = json.dumps(json.loads(content),
sort_keys=True,
ensure_ascii=False,
indent=DEFAULT_INDENT)
except ValueError:
# Invalid JSON but we don't care.
pass
return content
class XMLProcessor(BaseProcessor):
"""XML body processor."""
# TODO: tests
def process_body(self, content, content_type, subtype, encoding):
if subtype == 'xml':
try:
# Pretty print the XML
doc = xml.dom.minidom.parseString(content.encode(encoding))
content = doc.toprettyxml(indent=' ' * DEFAULT_INDENT)
except xml.parsers.expat.ExpatError:
# Ignore invalid XML errors (skips attempting to pretty print)
pass
return content
class PygmentsProcessor(BaseProcessor):
"""A processor that applies syntax-highlighting using Pygments
to the headers, and to the body as well if its content type is recognized.
"""
def __init__(self, *args, **kwargs):
super(PygmentsProcessor, self).__init__(*args, **kwargs)
# Cache to speed things up when we process a streamed body line by line.
self.lexers_by_type = {}
if not self.env.colors:
self.enabled = False
return
try:
style = get_style_by_name(
self.kwargs.get('pygments_style', DEFAULT_STYLE))
except ClassNotFound:
style = Solarized256Style
if self.env.is_windows or self.env.colors == 256:
fmt_class = Terminal256Formatter
else:
fmt_class = TerminalFormatter
self.formatter = fmt_class(style=style)
def process_headers(self, headers):
return pygments.highlight(
headers, HTTPLexer(), self.formatter).strip()
def process_body(self, content, content_type, subtype, encoding):
try:
lexer = self.lexers_by_type.get(content_type)
if not lexer:
try:
lexer = get_lexer_for_mimetype(content_type)
except ClassNotFound:
lexer = get_lexer_by_name(subtype)
self.lexers_by_type[content_type] = lexer
except ClassNotFound:
pass
else:
content = pygments.highlight(content, lexer, self.formatter)
return content.strip()
class HeadersProcessor(BaseProcessor):
"""Sorts headers by name retaining relative order of multiple headers
with the same name.
"""
def process_headers(self, headers):
lines = headers.splitlines()
headers = sorted(lines[1:], key=lambda h: h.split(':')[0])
return '\r\n'.join(lines[:1] + headers)
class OutputProcessor(object):
"""A delegate class that invokes the actual processors."""
installed_processors = {
'format': [
HeadersProcessor,
JSONProcessor,
XMLProcessor
],
'colors': [
PygmentsProcessor
]
}
def __init__(self, groups, env=Environment(), **kwargs):
"""
:param env: a :class:`models.Environment` instance
:param groups: the groups of processors to be applied
:param kwargs: additional keyword arguments for processors
"""
self.processors = []
for group in groups:
for cls in self.installed_processors[group]:
processor = cls(env, **kwargs)
if processor.enabled:
self.processors.append(processor)
def process_headers(self, headers):
for processor in self.processors:
headers = processor.process_headers(headers)
return headers
def process_body(self, content, content_type, encoding):
# e.g., 'application/atom+xml'
content_type = content_type.split(';')[0]
# e.g., 'xml'
subtype = content_type.split('/')[-1].split('+')[-1]
for processor in self.processors:
content = processor.process_body(
content,
content_type,
subtype,
encoding
)
return content
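The subtype extraction above works roughly like this (illustration only):
# 'application/json; charset=utf-8' -> content_type 'application/json', subtype 'json'
# 'application/atom+xml'            -> subtype 'xml'
# so JSONProcessor and XMLProcessor match on 'json' and 'xml' respectively.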


@ -0,0 +1,9 @@
from .base import AuthPlugin
from .manager import PluginManager
from .builtin import BasicAuthPlugin, DigestAuthPlugin
plugin_manager = PluginManager()
plugin_manager.register(BasicAuthPlugin)
plugin_manager.register(DigestAuthPlugin)

28
httpie/plugins/base.py Normal file

@ -0,0 +1,28 @@
class AuthPlugin(object):
"""
Base auth plugin class.
See <https://github.com/jkbr/httpie-ntlm> for an example auth plugin.
"""
# The value that should be passed to --auth-type
# to use this auth plugin, e.g., "my-auth".
auth_type = None
# The name of the plugin, e.g., "My auth".
name = None
# Optional short description. Will be shown in the help
# under --auth-type.
description = None
# This will be set automatically once the plugin has been loaded.
package_name = None
def get_auth(self, username, password):
"""
Return a ``requests.auth.AuthBase`` subclass instance.
"""
raise NotImplementedError()

26
httpie/plugins/builtin.py Normal file

@ -0,0 +1,26 @@
import requests.auth
from .base import AuthPlugin
class BuiltinAuthPlugin(AuthPlugin):
package_name = '(builtin)'
class BasicAuthPlugin(BuiltinAuthPlugin):
name = 'Basic HTTP auth'
auth_type = 'basic'
def get_auth(self, username, password):
return requests.auth.HTTPBasicAuth(username, password)
class DigestAuthPlugin(BuiltinAuthPlugin):
name = 'Digest HTTP auth'
auth_type = 'digest'
def get_auth(self, username, password):
return requests.auth.HTTPDigestAuth(username, password)

35
httpie/plugins/manager.py Normal file

@ -0,0 +1,35 @@
from pkg_resources import iter_entry_points
ENTRY_POINT_NAMES = [
'httpie.plugins.auth.v1'
]
class PluginManager(object):
def __init__(self):
self._plugins = []
def __iter__(self):
return iter(self._plugins)
def register(self, plugin):
self._plugins.append(plugin)
def get_auth_plugins(self):
return list(self._plugins)
def get_auth_plugin_mapping(self):
return dict((plugin.auth_type, plugin) for plugin in self)
def get_auth_plugin(self, auth_type):
return self.get_auth_plugin_mapping()[auth_type]
def load_installed_plugins(self):
for entry_point_name in ENTRY_POINT_NAMES:
for entry_point in iter_entry_points(entry_point_name):
plugin = entry_point.load()
plugin.package_name = entry_point.dist.key
self.register(plugin)
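A third-party plugin would advertise itself through the entry-point group above; a hypothetical setup.py sketch (the package and class names here are made up):
# entry_points={
#     'httpie.plugins.auth.v1': [
#         'httpie_foo_auth = httpie_foo_auth:FooAuthPlugin',
#     ],
# },
# load_installed_plugins() then registers FooAuthPlugin, making it
# selectable via --auth-type.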


@ -1,68 +0,0 @@
import os
import json
import pygments
from pygments import token
from pygments.util import ClassNotFound
from pygments.lexers import get_lexer_for_mimetype
from pygments.formatters.terminal256 import Terminal256Formatter
from pygments.formatters.terminal import TerminalFormatter
from pygments.lexer import RegexLexer, bygroups
from pygments.styles import get_style_by_name, STYLE_MAP
from .pygson import JSONLexer
from . import solarized
DEFAULT_STYLE = 'solarized'
AVAILABLE_STYLES = [DEFAULT_STYLE] + list(STYLE_MAP.keys())
FORMATTER = (Terminal256Formatter
if '256color' in os.environ.get('TERM', '')
else TerminalFormatter)
class HTTPLexer(RegexLexer):
name = 'HTTP'
aliases = ['http']
filenames = ['*.http']
tokens = {
'root': [
(r'\s+', token.Text),
# Request-Line
(r'([A-Z]+\s+)(/.*?)(\s+HTTP/[\d.]+)', bygroups(
token.Keyword, token.String, token.Keyword)),
# Status-Line
(r'(HTTP/[\d.]+\s+)(\d+)(\s+.+)', bygroups(
token.Keyword, token.Number, token.String)),
# Header
(r'(.*?:)(.+)', bygroups(token.Name, token.Keyword))
]}
class PrettyHttp(object):
def __init__(self, style_name):
if style_name == 'solarized':
style = solarized.SolarizedStyle
else:
style = get_style_by_name(style_name)
self.formatter = FORMATTER(style=style)
def headers(self, content):
return pygments.highlight(content, HTTPLexer(), self.formatter)
def body(self, content, content_type):
lexer = None
content_type = content_type.split(';')[0]
if 'json' in content_type:
lexer = JSONLexer()
try:
# Indent the JSON data.
content = json.dumps(json.loads(content),
sort_keys=True, indent=4)
except Exception:
pass
if not lexer:
try:
lexer = get_lexer_for_mimetype(content_type)
except ClassNotFound:
return content
return pygments.highlight(content, lexer, self.formatter)


@ -1,77 +0,0 @@
"""
JSON lexer by Norman Richards
It's already been merged into Pygments but not released yet,
so we are temporarily bundling it with HTTPie.
It can be removed once Pygments > 1.4 has been released.
See <https://github.com/jkbr/httpie/pull/25> for more details.
"""
import re
from pygments import token
from pygments.lexer import RegexLexer, include
class JSONLexer(RegexLexer):
name = 'JSON Lexer'
aliases = ['json']
filenames = ['*.json']
mimetypes = []
flags = re.DOTALL
tokens = {
'whitespace': [
(r'\s+', token.Text),
],
# represents a simple terminal value
'simplevalue':[
(r'(true|false|null)\b', token.Keyword.Constant),
(r'-?[0-9]+', token.Number.Integer),
(r'"(\\\\|\\"|[^"])*"', token.String.Double),
],
# the right hand side of an object, after the attribute name
'objectattribute': [
include('value'),
(r':', token.Punctuation),
# comma terminates the attribute but expects more
(r',', token.Punctuation, '#pop'),
# a closing bracket terminates the entire object, so pop twice
(r'}', token.Punctuation, ('#pop', '#pop')),
],
# a json object - { attr, attr, ... }
'objectvalue': [
include('whitespace'),
(r'"(\\\\|\\"|[^"])*"', token.Name.Tag, 'objectattribute'),
(r'}', token.Punctuation, '#pop'),
],
# json array - [ value, value, ... }
'arrayvalue': [
include('whitespace'),
include('value'),
(r',', token.Punctuation),
(r']', token.Punctuation, '#pop'),
],
# a json value - either a simple value or a complex value (object or array)
'value': [
include('whitespace'),
include('simplevalue'),
(r'{', token.Punctuation, 'objectvalue'),
(r'\[', token.Punctuation, 'arrayvalue'),
],
# the root of a json document would be a value
'root': [
include('value'),
],
}

153
httpie/sessions.py Normal file

@ -0,0 +1,153 @@
"""Persistent, JSON-serialized sessions.
"""
import re
import os
import requests
from requests.cookies import RequestsCookieJar, create_cookie
from .compat import urlsplit
from .config import BaseConfigDict, DEFAULT_CONFIG_DIR
from httpie.plugins import plugin_manager
SESSIONS_DIR_NAME = 'sessions'
DEFAULT_SESSIONS_DIR = os.path.join(DEFAULT_CONFIG_DIR, SESSIONS_DIR_NAME)
VALID_SESSION_NAME_PATTERN = re.compile('^[a-zA-Z0-9_.-]+$')
# Request headers starting with these prefixes won't be stored in sessions.
# They are specific to each request.
# http://en.wikipedia.org/wiki/List_of_HTTP_header_fields#Requests
SESSION_IGNORED_HEADER_PREFIXES = ['Content-', 'If-']
def get_response(session_name, requests_kwargs, config_dir, args,
read_only=False):
"""Like `client.get_response`, but applies permanent
aspects of the session to the request.
"""
if os.path.sep in session_name:
path = os.path.expanduser(session_name)
else:
hostname = (
requests_kwargs['headers'].get('Host', None)
or urlsplit(requests_kwargs['url']).netloc.split('@')[-1]
)
assert re.match('^[a-zA-Z0-9_.:-]+$', hostname)
# host:port => host_port
hostname = hostname.replace(':', '_')
path = os.path.join(config_dir,
SESSIONS_DIR_NAME,
hostname,
session_name + '.json')
session = Session(path)
session.load()
request_headers = requests_kwargs.get('headers', {})
requests_kwargs['headers'] = dict(session.headers, **request_headers)
session.update_headers(request_headers)
if args.auth:
session.auth = {
'type': args.auth_type,
'username': args.auth.key,
'password': args.auth.value,
}
elif session.auth:
requests_kwargs['auth'] = session.auth
requests_session = requests.Session()
requests_session.cookies = session.cookies
try:
response = requests_session.request(**requests_kwargs)
except Exception:
raise
else:
# Existing sessions with `read_only=True` don't get updated.
if session.is_new or not read_only:
session.cookies = requests_session.cookies
session.save()
return response
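To make the path logic concrete (illustration only): a session name without a path separator is stored under the config directory, keyed by host, while a name containing a separator is treated as a literal path.
# --session=user1 + host example.org  ->  <config_dir>/sessions/example.org/user1.json
# --session=/tmp/s.json               ->  /tmp/s.json (used as-is)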
class Session(BaseConfigDict):
helpurl = 'https://github.com/jkbr/httpie#sessions'
about = 'HTTPie session file'
def __init__(self, path, *args, **kwargs):
super(Session, self).__init__(*args, **kwargs)
self._path = path
self['headers'] = {}
self['cookies'] = {}
self['auth'] = {
'type': None,
'username': None,
'password': None
}
def _get_path(self):
return self._path
def update_headers(self, request_headers):
"""
Update the session headers with the request ones while ignoring
certain name prefixes.
:type request_headers: dict
"""
for name, value in request_headers.items():
if name == 'User-Agent' and value.startswith('HTTPie/'):
continue
for prefix in SESSION_IGNORED_HEADER_PREFIXES:
if name.lower().startswith(prefix.lower()):
break
else:
self['headers'][name] = value
@property
def headers(self):
return self['headers']
@property
def cookies(self):
jar = RequestsCookieJar()
for name, cookie_dict in self['cookies'].items():
jar.set_cookie(create_cookie(
name, cookie_dict.pop('value'), **cookie_dict))
jar.clear_expired_cookies()
return jar
@cookies.setter
def cookies(self, jar):
"""
:type jar: CookieJar
"""
# http://docs.python.org/2/library/cookielib.html#cookie-objects
stored_attrs = ['value', 'path', 'secure', 'expires']
self['cookies'] = {}
for cookie in jar:
self['cookies'][cookie.name] = dict(
(attname, getattr(cookie, attname))
for attname in stored_attrs
)
@property
def auth(self):
auth = self.get('auth', None)
if not auth or not auth['type']:
return
auth_plugin = plugin_manager.get_auth_plugin(auth['type'])()
return auth_plugin.get_auth(auth['username'], auth['password'])
@auth.setter
def auth(self, auth):
assert set(['type', 'username', 'password']) == set(auth.keys())
self['auth'] = auth
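A rough sketch of what a saved session file contains (illustration only; usernames, cookies, and values are made up, and the '__meta__' block comes from BaseConfigDict.save):
# {
#     "__meta__": {"about": "HTTPie session file",
#                  "help": "https://github.com/jkbr/httpie#sessions",
#                  "httpie": "0.7.1"},
#     "auth": {"type": "basic", "username": "john", "password": "s3cr3t"},
#     "cookies": {"sessionid": {"value": "...", "path": "/",
#                               "secure": false, "expires": null}},
#     "headers": {"X-Foo": "bar"}
# }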


@ -1,73 +1,57 @@
"""
A Pygments_ style based on the dark background variant of Solarized_.
.. _Pygments: http://pygments.org/
.. _Solarized: http://ethanschoonover.com/solarized
Copyright (c) 2011 Hank Gay
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
# -*- coding: utf-8 -*-
"""
solarized256
------------
A Pygments style inspired by Solarized's 256 color mode.
:copyright: (c) 2011 by Hank Gay, (c) 2012 by John Mastro.
:license: BSD, see LICENSE for more details.
"""
from pygments.style import Style
from pygments.token import Token, Comment, Name, Keyword, Generic, Number, Operator, String
from pygments.token import Token, Comment, Name, Keyword, Generic, Number, \
Operator, String
BASE03 = "#1c1c1c"
BASE02 = "#262626"
BASE01 = "#4e4e4e"
BASE00 = "#585858"
BASE0 = "#808080"
BASE1 = "#8a8a8a"
BASE2 = "#d7d7af"
BASE3 = "#ffffd7"
YELLOW = "#af8700"
ORANGE = "#d75f00"
RED = "#af0000"
MAGENTA = "#af005f"
VIOLET = "#5f5faf"
BLUE = "#0087ff"
CYAN = "#00afaf"
GREEN = "#5f8700"
BASE03 = '#002B36'
BASE02 = '#073642'
BASE01 = '#586E75'
BASE00 = '#657B83'
BASE0 = '#839496'
BASE1 = '#93A1A1'
BASE2 = '#EEE8D5'
BASE3 = '#FDF6E3'
YELLOW = '#B58900'
ORANGE = '#CB4B16'
RED = '#DC322F'
MAGENTA = '#D33682'
VIOLET = '#6C71C4'
BLUE = '#268BD2'
CYAN = '#2AA198'
GREEN = '#859900'
class SolarizedStyle(Style):
class Solarized256Style(Style):
background_color = BASE03
styles = {
Keyword: GREEN,
Keyword.Constant: ORANGE,
Keyword.Declaration: BLUE,
#Keyword.Namespace
Keyword.Namespace: ORANGE,
#Keyword.Pseudo
Keyword.Reserved: BLUE,
Keyword.Type: RED,
#Name
Name.Attribute: BASE1,
Name.Builtin: YELLOW,
Name.Builtin: BLUE,
Name.Builtin.Pseudo: BLUE,
Name.Class: BLUE,
Name.Constant: ORANGE,
Name.Decorator: BLUE,
Name.Entity: ORANGE,
Name.Exception: ORANGE,
Name.Exception: YELLOW,
Name.Function: BLUE,
#Name.Label
#Name.Namespace
@@ -83,10 +67,10 @@ class SolarizedStyle(Style):
String: CYAN,
String.Backtick: BASE01,
String.Char: CYAN,
String.Doc: BASE1,
String.Doc: CYAN,
#String.Double
String.Escape: ORANGE,
String.Heredoc: BASE1,
String.Escape: RED,
String.Heredoc: CYAN,
#String.Interpol
#String.Other
String.Regex: RED,
@@ -99,8 +83,8 @@ class SolarizedStyle(Style):
#Number.Integer.Long
#Number.Oct
Operator: GREEN,
#Operator.Word
Operator: BASE1,
Operator.Word: GREEN,
#Punctuation: ORANGE,
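
As a side note (not part of the diff): a Pygments style class such as Solarized256Style is normally consumed by handing it to a formatter. A minimal sketch, assuming the class is importable from httpie.solarized as in this tree and using a throwaway JSON snippet:

    # Illustrative only; the JSON body is made up.
    from pygments import highlight
    from pygments.lexers import get_lexer_by_name
    from pygments.formatters import Terminal256Formatter
    from httpie.solarized import Solarized256Style

    body = '{"hello": "world", "count": 1}'
    lexer = get_lexer_by_name('json')
    print(highlight(body, lexer, Terminal256Formatter(style=Solarized256Style)))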

46
httpie/utils.py Normal file

@@ -0,0 +1,46 @@
from __future__ import division
def humanize_bytes(n, precision=2):
# Author: Doug Latornell
# Licence: MIT
# URL: http://code.activestate.com/recipes/577081/
"""Return a humanized string representation of a number of bytes.
Assumes `from __future__ import division`.
>>> humanize_bytes(1)
'1 B'
>>> humanize_bytes(1024, 1)
'1.0 kB'
>>> humanize_bytes(1024 * 123, 1)
'123.0 kB'
>>> humanize_bytes(1024 * 12342, 1)
'12.1 MB'
>>> humanize_bytes(1024 * 12342, 2)
'12.05 MB'
>>> humanize_bytes(1024 * 1234, 2)
'1.21 MB'
>>> humanize_bytes(1024 * 1234 * 1111, 2)
'1.31 GB'
>>> humanize_bytes(1024 * 1234 * 1111, 1)
'1.3 GB'
"""
abbrevs = [
(1 << 50, 'PB'),
(1 << 40, 'TB'),
(1 << 30, 'GB'),
(1 << 20, 'MB'),
(1 << 10, 'kB'),
(1, 'B')
]
if n == 1:
return '1 B'
for factor, suffix in abbrevs:
if n >= factor:
break
return '%.*f %s' % (precision, n / factor, suffix)
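
A quick, hedged illustration of the helper above (not part of the diff); the byte counts are arbitrary and the output mirrors the kind of string a download progress line could use:

    # Invented numbers, for illustration only.
    from httpie.utils import humanize_bytes

    downloaded = 3 * 1024 * 1024 + 200 * 1024   # bytes received so far
    total = 50 * 1024 * 1024                    # expected total size
    speed = 512 * 1024                          # bytes per second

    print('%s of %s (%s/s)' % (humanize_bytes(downloaded),
                               humanize_bytes(total),
                               humanize_bytes(speed)))
    # -> 3.20 MB of 50.00 MB (512.00 kB/s)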

3
requirements-dev.txt Normal file

@@ -0,0 +1,3 @@
tox
git+git://github.com/kennethreitz/httpbin.git@7c96875e87a448f08fb1981e85eb79e77d592d98
docutils


@@ -1,31 +1,46 @@
import os
import sys
import codecs
from setuptools import setup
import httpie
if sys.argv[-1] == 'test':
os.system('python tests.py')
sys.exit()
status = os.system('python tests/tests.py')
sys.exit(1 if status > 127 else status)
requirements = ['requests>=0.10.1', 'Pygments>=1.4']
if sys.version_info[:2] in ((2, 6), (3, 1)):
# argparse has been added in Python 3.2 / 2.7
requirements = [
'requests>=2.0.0',
'Pygments>=1.5'
]
try:
#noinspection PyUnresolvedReferences
import argparse
except ImportError:
requirements.append('argparse>=1.2.1')
if 'win32' in str(sys.platform).lower():
# Terminal colors for Windows
requirements.append('colorama>=0.2.4')
def long_description():
with codecs.open('README.rst', encoding='utf8') as f:
return f.read()
setup(
name='httpie',
version=httpie.__version__,
description=httpie.__doc__.strip(),
long_description=open('README.rst').read(),
long_description=long_description(),
url='http://httpie.org/',
download_url='https://github.com/jkbr/httpie',
author=httpie.__author__,
author_email='jakub@roztocil.name',
license=httpie.__licence__,
packages=['httpie'],
packages=['httpie', 'httpie.plugins'],
entry_points={
'console_scripts': [
'http = httpie.__main__:main',
@@ -48,5 +63,6 @@ setup(
'Topic :: System :: Networking',
'Topic :: Terminals',
'Topic :: Text Processing',
'Topic :: Utilities'
],
)

BIN
tests/fixtures/file.bin vendored Normal file

Binary file not shown (size: 1.1 KiB).

1
tests/fixtures/file2.txt vendored Normal file

@@ -0,0 +1 @@
__test_file_content__

1673
tests/tests.py Normal file → Executable file

File diff suppressed because it is too large.

13
tox.ini Normal file

@@ -0,0 +1,13 @@
# Tox (http://tox.testrun.org/) is a tool for running tests
# in multiple virtualenvs. This configuration file will run the
# test suite on all supported python versions. To use it, "pip install tox"
# and then run "tox" from this directory.
[tox]
envlist = py26, py27, py33, pypy
[testenv]
commands = {envpython} setup.py test
[testenv:py26]
deps = argparse