Compare commits

...

313 Commits
0.3.1 ... 0.9.0

Author SHA1 Message Date
6c66d91f59 v0.0.9 2015-01-31 13:21:45 +01:00
ed6485498b README 2015-01-24 00:41:22 +01:00
59b6020105 Extended SSL documentation. 2015-01-24 00:22:31 +01:00
12f2d99bfd Added test client SSL certs 2015-01-23 23:56:08 +01:00
5fbafc18bc Added tests for client as well as server SSL certificate handling. 2015-01-23 23:55:03 +01:00
df07927843 --certkey is now --cert-key 2015-01-23 23:54:27 +01:00
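For illustration, a client-certificate request with the renamed flag (file names here are placeholders, not from the repository):

    $ http --cert=client.pem --cert-key=client.key https://example.org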
d3d78afb6a Pypy3 (2.4.0) curses bug workaround. 2015-01-23 22:19:02 +01:00
25b1be7c8a Work around missing object_pairs_hook in Python 2.6 2015-01-23 22:04:42 +01:00
22c993bab8 Merge branch 'fix-268' of https://github.com/asnelzin/httpie into asnelzin-fix-268 2015-01-23 21:45:09 +01:00
b2ec4f797f Exit with 0 for --version and --help (closes #293). 2015-01-19 15:39:46 +01:00
a2b12f75ea Fixed and added test for JSON properties order. 2014-11-13 23:56:05 +03:00
0481957715 Fixed multiple uploads with the same field name
Closes #267
2014-10-20 14:41:48 +02:00
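A sketch of the fixed case, repeating one form field name for several files (field and file names are invented):

    $ http --form POST example.org/upload attachment@a.txt attachment@b.txt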
c301305a59 Cleanup. 2014-10-20 14:41:48 +02:00
2078ece95a Cleanup 2014-10-20 14:41:48 +02:00
43f7b84a1e Merge pull request #260 from brakhane/master
Fallback to JSON highlighting if subtype contains json
2014-09-25 06:27:17 +02:00
f1cd289d51 Fallback to JSON highlighting if subtype contains json
Some JSON based formats like JSON Home Documents[1] don't
use a '+json' suffix, but simply contain json in their
MIME type. Also, some servers might use (outdated)
types like 'application/x-json'.

The JSON formatter can already handle those cases,
but the highlighter was ignoring them.

This commit will let the highlighter choose the JSON
lexer if no other lexer could be found and the MIME subtype
contains 'json'

[1] http://tools.ietf.org/html/draft-nottingham-json-home-03
2014-09-25 00:10:06 +02:00
24f46ff3ef Changelog 2014-09-08 07:50:41 +02:00
afe521ef73 Merge remote-tracking branch 'origin/master' 2014-09-08 07:47:55 +02:00
58b51a8277 Improved terminal color depth detection via curses
Closes #244
2014-09-08 07:46:53 +02:00
6aa711c69f Removed pytest-xdist
The test suite is much less IO-bound now with the local httpbin
instance (via pytest-httpbin). Therefore, parallelization is not
as helpful.
2014-09-08 07:44:25 +02:00
d2d1023921 Merge pull request #249 from frewsxcv/patch-1
Enable testing on PyPy 3
2014-09-07 10:45:29 +02:00
b0effe07d9 Fixed --output=/dev/null on Linux
Closes #252
2014-09-07 10:22:21 +02:00
af873effb6 Changelog typo. 2014-09-05 18:40:28 +02:00
5084f18568 '\' only escapes separator characters in req-items
It makes it easier to work with Windows paths.

Closes #253, #254
2014-09-05 18:36:23 +02:00
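Roughly, with this change only separator characters need escaping; the host and values below are placeholders:

    $ http example.org 'field\:name=value'            # escaped ':' stays in the field name
    $ http example.org 'path=C:\Users\me\file.txt'    # other backslashes pass through unchanged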
1035710956 Added RequestItems named tuple for convenience. 2014-09-05 07:51:35 +02:00
5d2b3f5552 Enable testing on PyPy 3 2014-08-15 00:03:27 -07:00
ca36f1de04 Handle empty passwords in URL credentials
Closes #242
2014-07-18 13:39:47 +02:00
0f96348fd1 Cleanup 2014-07-18 13:39:47 +02:00
2fd84ec1da Merge pull request #241 from ametaireau/patch-1
Add the hawk auth plugin
2014-07-17 07:58:08 +02:00
e3c83fca6f Add the hawk plugin 2014-07-17 00:48:56 +02:00
529f3bd9b6 Fixed python setup.py test 2014-06-28 19:52:10 +02:00
2a72ae23d5 Run tests against local httpbin instance via pytest-httpbin. 2014-06-28 16:38:41 +02:00
79329ed1c6 Mention "brew install httpie --HEAD". 2014-06-28 13:26:48 +02:00
040d981f00 Fixed custom Host
Closes #235
2014-06-28 13:24:14 +02:00
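A minimal example of the overridden Host header that this fix concerns (hostnames are illustrative):

    $ http example.org Host:staging.example.org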
8c892edd4f PEP8 2014-06-28 13:09:04 +02:00
a02a1eb562 Fixed README formatting 2014-06-24 17:27:01 +02:00
5e556612d9 Added $ brew install httpie to README
https://twitter.com/jakubroztocil/status/481453834024550400

Thanks @insomniacslk!
2014-06-24 17:25:29 +02:00
f5904d92c3 Merge pull request #225 from rockymeza/docs_grep_fix
Fixed the order of args to grep in README.
2014-06-15 16:35:19 +02:00
541c75ed5c Fixed the order of args to grep in README. 2014-06-15 08:14:37 -06:00
8e170b059c Fixed tests. 2014-06-03 19:45:57 +02:00
b44bc0928f Merge pull request #222 from felixbuenemann/patch-1
Add info about SNI on Python 2.x to README
2014-05-26 15:37:11 +02:00
f283de6968 Add info about SNI on Python 2.x to README
This updates the HTTPS section of the README with instructions on how to get SNI working on Python 2.x.
2014-05-26 15:31:16 +02:00
77955c9837 Fixed --timeout
* Require requests >= 2.3.0
* Updated test_timeout_exit_status

Close #185.
2014-05-17 22:33:16 +02:00
4449da456a Merge pull request #220 from frewsxcv/patch-1
Add supported, relevant Python version classifiers
2014-05-14 14:24:30 +02:00
f9b5b3a65d Added OSX to Travis CI config. 2014-05-14 14:00:26 +02:00
10f7fc163b Add supported, relevant Python version classifiers 2014-05-12 17:36:09 -07:00
5743363ac9 Merge branch 'master' of github.com:jkbr/httpie 2014-05-12 19:16:15 +02:00
7036ec69ff Enable testing on Python 3.4 2014-05-12 19:16:04 +02:00
02c66e14df Update CONTRIBUTING.rst 2014-05-12 19:16:04 +02:00
ea8132b3d6 Update CONTRIBUTING.rst 2014-05-12 19:16:04 +02:00
e4c68063b9 Converted built-in formatters to formatter plugins.
Still work in progress and the API should be considered private for now.
2014-05-12 19:12:39 +02:00
9c2207844e Merge pull request #219 from frewsxcv/patch-1
Enable testing on Python 3.4
2014-05-12 08:02:17 +02:00
b51775bb06 Enable testing on Python 3.4 2014-05-11 20:09:47 -07:00
f26272f83f Update CONTRIBUTING.rst 2014-05-09 12:48:34 +01:00
81518f9315 Update CONTRIBUTING.rst 2014-05-09 12:46:33 +01:00
858555abb5 Make sure session and default headers play nice
Before: headers = default + args + session
Now:    headers = default + session + args

Fixes #180
2014-05-08 12:27:50 +01:00
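In practice, a header passed on the command line now wins over one stored in a session; the session name and header below are made up:

    $ http --session=api example.org Accept:application/xml    # command-line value overrides the stored one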
3e1b62fb20 Fixed .rst syntax. 2014-05-05 21:17:41 +02:00
d9eca19b8f New URL. 2014-05-05 21:17:23 +02:00
5a989b6075 Fixed Makefile, added setup.cfg. 2014-04-28 13:27:02 +02:00
29a564ef56 Added wheel support
Should make installation via pip work on OSX Mavericks (#148).

Also added a nifty Makefile.
2014-04-28 13:25:47 +02:00
2aa53e4be3 Avoid “__init__.py” files in test directories.
As recommended here:

	https://pytest.org/latest/goodpractises.html
2014-04-28 11:29:41 +02:00
faec00fd99 Improve support for 'type/subtype+suffix' mime types in the colors output formatter.
E.g.:
* application/ld+json
* application/hal+json

Closes #189, #206
2014-04-28 10:08:03 +02:00
76ab8b84be Cleanup 2014-04-28 10:01:56 +02:00
14763e619d Travis coveralls. 2014-04-28 01:05:03 +02:00
0e6875bf83 Handle HTTP 0.9 in response when formatting version.
Closes #170
2014-04-28 00:08:20 +02:00
bd50a6adb1 Moved .directory from BaseConfigDict to Config.
Closes #200
2014-04-27 23:12:48 +02:00
f67a11c165 Debug appveyor 2014-04-27 22:20:23 +02:00
64b9a86c52 Debug appveyor 2014-04-27 22:15:21 +02:00
c8ae697eec Python 3.4 @ appveyor. 2014-04-27 22:14:11 +02:00
82e16c4f27 Debug appveyor 2014-04-27 22:10:24 +02:00
05db75bdb1 Modularized output, refactoring
Making it ready for output formatting plugin API.
2014-04-27 21:58:00 +02:00
c06598a0c4 Cleanup 2014-04-27 18:27:44 +02:00
18f3700b77 Fix appveyor.yml V. 2014-04-27 17:54:30 +02:00
d05063f019 Fix appveyor.yml IV. 2014-04-27 17:52:04 +02:00
7c3f8c021e Fix appveyor.yml III. 2014-04-27 17:50:54 +02:00
a95d8bb42d Fix appveyor.yml 2014-04-27 17:46:19 +02:00
411822d3b2 Fix appveyor.yml 2014-04-27 17:45:23 +02:00
bae8519e29 Python3.3 Windows CI 2014-04-27 17:38:12 +02:00
87806acc56 Cleanup 2014-04-26 23:06:39 +02:00
1169a3eb23 Fixed tests. 2014-04-26 20:14:46 +02:00
43bc6d0c98 Fixed and added tests for --verbose with unicode headers. 2014-04-26 20:10:15 +02:00
eca1ffaedb More unicode. 2014-04-26 19:47:14 +02:00
0bd218eab0 Cleanup 2014-04-26 19:32:08 +02:00
609950f327 Updated Travis icon URL. 2014-04-26 18:48:57 +02:00
bbc820bf2e Fixed fixture loading on Windows. 2014-04-26 18:41:28 +02:00
84a521a827 Added test_unicode_url_query_arg_item. 2014-04-26 18:23:13 +02:00
a3352af1d4 Added support and tests for unicode support in sessions. 2014-04-26 18:16:30 +02:00
e8a1c051f9 Changelog 2014-04-26 17:53:35 +02:00
3478cbd9ff More unicode tests. 2014-04-26 17:53:01 +02:00
77dcd6e919 Added unicode characters to json fixture. 2014-04-26 17:37:56 +02:00
467d126b6c Python 3 unicode fixes. 2014-04-26 17:35:26 +02:00
8ec32fe7f3 Fix tox config. 2014-04-26 16:50:31 +02:00
282cc455e3 Avoid "TypeError: keyword arguments must be strings" on Python 3.3. 2014-04-26 15:18:38 +02:00
56d33a8e51 Fix Windows branch. 2014-04-26 15:10:39 +02:00
15e62ad26d Implemented more robust unicode handling.
* Immediately convert all args from `bytes` to `str`.
* Added `Environment.stdin_encoding` and `Environment.stdout_encoding`
* Allow unicode characters in HTTP headers and basic auth credentials
  by encoding them using UTF8 instead of latin1 (#212).
2014-04-26 15:07:31 +02:00
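A hedged example of what this enables: non-ASCII basic-auth credentials and header values, sent UTF-8-encoded (names and values are arbitrary):

    $ http -a 'jméno:heslo' example.org 'X-Pozdrav:Ahoj světe'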
5c29a4e551 Added windows build status icon to README. 2014-04-26 11:32:41 +02:00
0c45c7cb39 Disabled test_windows_colorized_output 2014-04-26 11:06:50 +02:00
8158fa8c45 Run tests in verbose mode. 2014-04-26 11:03:53 +02:00
5065c4f878 Updated appveyor.yml 2014-04-26 11:01:02 +02:00
e3af74da46 Don't use pytest-xdist with setup.py test 2014-04-26 10:59:46 +02:00
5c3d24ec09 Updated appveyor.yml 2014-04-26 10:49:40 +02:00
091a8b2692 Updated appveyor.yml 2014-04-26 10:46:08 +02:00
95a0884f95 Updated appveyor.yml 2014-04-26 10:41:57 +02:00
8fb1e106ee Updated appveyor.yml 2014-04-26 10:36:12 +02:00
78c83da721 Updated appveyor.yml 2014-04-26 10:33:13 +02:00
aeccac5cbd Updated appveyor.yml 2014-04-26 10:28:16 +02:00
e2dabbfaf7 Updated appveyor.yml 2014-04-26 10:26:29 +02:00
272e66bf37 Updated appveyor.yml 2014-04-26 10:22:17 +02:00
4a0d387f86 Updated appveyor.yml 2014-04-26 10:20:45 +02:00
6a86164510 Updated appveyor.yml 2014-04-26 10:14:57 +02:00
e1348da118 Updated appveyor.yml 2014-04-26 10:13:31 +02:00
0e1b651a1c Added appveyor.yml 2014-04-26 10:07:35 +02:00
631e332dad Cleanup 2014-04-25 13:57:33 +02:00
33422312c5 Cleanup 2014-04-25 13:52:43 +02:00
1d987c5b4d Improved session tests. 2014-04-25 13:50:44 +02:00
3c2de34285 Improved auth tests. 2014-04-25 13:10:01 +02:00
b10d973019 Removed unused import. 2014-04-25 12:53:02 +02:00
492ee392bd Cleanup 2014-04-25 12:42:50 +02:00
af4aa3a761 Test improvements. 2014-04-25 12:18:35 +02:00
27faf06327 Removed last dependencies on unittest. All tests are pytest-only. 2014-04-25 11:39:59 +02:00
f658d24c93 Parametrize test_docs.py. 2014-04-25 10:41:04 +02:00
ea42d32f69 Travis doesn't support Python 3.4 yet. 2014-04-25 09:47:35 +02:00
3f63133b7c Parallelized tests using pytest-xdist. 2014-04-24 21:36:03 +02:00
3f8a000847 Python 3.4 2014-04-24 20:08:28 +02:00
f02169ea71 Added Python 2.6 compatible OrderedDict
To preserve the order of headers, parameters, etc.
2014-04-24 19:57:19 +02:00
e5d758e4ce More tests. 2014-04-24 19:32:55 +02:00
ce2169f4fe Added docstrings for utilities in tests.__init__. 2014-04-24 19:32:55 +02:00
bdea7be456 Added tests for --debug and --help. 2014-04-24 19:32:55 +02:00
887f70f595 Added CONTRIBUTING.rst. 2014-04-24 19:32:55 +02:00
3d079942f4 Finished pytest migration. 2014-04-24 19:32:55 +02:00
3cb124bba7 Cleanup
XX
2014-04-24 19:32:50 +02:00
6f28624134 Switched to @pytest.mark.skipif. 2014-04-24 15:17:23 +02:00
941c0a8c3c Moved fixture constants to tests.fixtures. 2014-04-24 15:17:04 +02:00
b880e996d0 Converted all unittest asserts to plain, pytest-powered asserts. 2014-04-24 14:58:15 +02:00
6071fff4af Refactored tests into smaller modules. 2014-04-24 14:07:31 +02:00
746a1899f3 Skip ExitStatusTest.test_timeout_exit_status until timeout gets fixed in requests. 2014-03-31 13:01:55 +02:00
bbbae3ae25 Fixed SessionTest.test_session_read_only. 2014-03-31 13:01:55 +02:00
e62620d4ad Merge pull request #208 from insyte/master
Update README.rst with pronunciation.
2014-03-25 10:19:32 +01:00
a2918d877d Update README.rst 2014-03-24 18:03:59 -05:00
733771fd9e Merge pull request #172 from unsignedint/master
process XML data before pretty-printing to trim whitespace
2014-03-18 19:44:16 +01:00
76ab6e49d5 Updated installation instructions. 2014-03-04 18:44:31 +01:00
c33775e785 Updated installation instructions. 2014-03-04 18:42:33 +01:00
09810d55ba Updated installation instructions. 2014-03-04 18:36:22 +01:00
29877bc8ad Updated installation instructions. 2014-03-04 18:24:32 +01:00
af6bda11af Removed Bitdeli badge. 2014-02-18 14:09:50 +01:00
b01906a45c Fixed ZeroDivisionError in download summary.
Closes #202
2014-02-18 13:06:18 +01:00
2c885b0981 Merge pull request #197 from matleh/master
add support for client SSL certificate and key
2014-02-12 14:46:59 +01:00
b3a34aba44 added --cert to CHANGELOG and matleh to AUTHORS 2014-02-12 11:23:31 +01:00
dd7197c60b document --cert and --certkey 2014-02-05 12:51:05 +01:00
a3aae12d9c rename -ssl-cert and --ssl-key to --cert and --certkey 2014-02-05 12:50:40 +01:00
d4363a560d rename existing_file to readable_file_arg and move to input 2014-01-29 18:02:06 +01:00
b9d7220b10 check --ssl-cert and --ssl-key to be files 2014-01-29 15:54:19 +01:00
14583a2efa add support for client SSL certificate and key 2014-01-28 16:16:48 +01:00
43cc3e7ddb Fixed changelog link. 2014-01-25 15:15:16 +01:00
f1224da526 v0.8.0 2014-01-25 15:11:38 +01:00
e0cc63c7eb Cleanup 2014-01-25 15:09:28 +01:00
52dd6adaa3 Updated README. 2014-01-25 15:04:15 +01:00
1aa77017d5 Catch UnicodeDecodeError when embedding file via =@ or :=@. 2014-01-25 14:57:19 +01:00
748a0a480d Update README.rst 2014-01-17 08:57:05 +01:00
01df344a07 Update README.rst 2014-01-17 08:56:24 +01:00
b1074ccb4f Merge pull request #191 from solidsnack/wip-no-auth-in-host-header
Expunge user:pass@... from Host header.
2014-01-08 02:28:19 -08:00
7a84163d1c Merge pull request #192 from thomasleveil/patch-1
fix typo
2014-01-08 02:27:29 -08:00
a31d552d1c fix typo 2014-01-07 14:04:13 +01:00
5a037b2e13 Expunge user:pass@... from Host header.
In verbose mode, the basic auth user and password would show up in colored
output reporting the Host header, as reported in
https://github.com/jkbr/httpie/issues/169
2014-01-06 19:12:33 +00:00
6af42b1827 Added Bitdeli badge. 2013-12-08 11:38:26 +01:00
bee10e5eed replace XML processor with ElementTree with custom indentation 2013-10-16 13:07:53 +13:00
bcdf194bae process XML data before pretty-printing to trim whitespace 2013-10-16 12:33:19 +13:00
0e267d8efa Added a link to the httpie-negotiate auth plugin by @ndzou II. 2013-10-09 23:46:55 +02:00
927acc283e Added a link to the httpie-negotiate auth plugin by @ndzou. 2013-10-09 23:44:55 +02:00
817165f5ff Merge pull request #171 from nlf/master
Allow :port style shorthand for localhost.
2013-10-09 13:22:30 -07:00
4fe3deb9d9 add self to authors, update changelog, and mention shorthand in --help output 2013-10-09 13:21:14 -07:00
9034546b80 tweak readme more 2013-10-09 11:37:05 -07:00
2c12fd99f9 tweak readme more 2013-10-09 11:36:01 -07:00
70eb97dece tweak readme to show http requests 2013-10-09 11:34:22 -07:00
8a52bef559 make shorthand parsing more robust, add unit tests and documentation 2013-10-09 11:32:41 -07:00
711168a899 allow :port style shorthand 2013-10-08 22:41:38 -07:00
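The shorthand expands against localhost, as the updated --help text further below also shows:

    $ http :3000    # => http://localhost:3000
    $ http :/foo    # => http://localhost/foo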
81c99886fd Update --proxy examples to include URLs to work with Requests v2.0.0.. 2013-09-25 22:02:29 +02:00
2e535d8345 Fixed password prompt. 2013-09-25 00:17:50 +02:00
0bcd4d2fb0 Fixed a bytes/str issue for Python 3. 2013-09-25 00:00:17 +02:00
d5bc564e4f Allow embedding text (=@) and JSON (:=@) file content into request data fields. 2013-09-24 23:41:18 +02:00
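Sketch of the two new separators (paths are placeholders): '=@' embeds the file as a text value, ':=@' as raw JSON:

    $ http POST example.org essay=@~/Documents/essay.txt package:=@./package.json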
54c5c3d82b 0.7.1 2013-09-24 21:57:29 +02:00
2a6514eb5d Update to requests 2.0.0
Closes #140.
2013-09-24 21:49:43 +02:00
22c2cc6465 Removed unused import. 2013-09-24 20:30:54 +02:00
2265edf05e Cleanup 2013-09-24 20:15:19 +02:00
87774acf5c Changelog 2013-09-24 20:09:23 +02:00
9d2ac5d8ad 0.7.0 2013-09-24 20:07:48 +02:00
3e4e1c72a4 Merge branch 'master' of github.com:jkbr/httpie 2013-09-24 19:51:06 +02:00
29f6b6a2a9 Improved Content-Disposition parsing for --download mode
Closes #168.
2013-09-24 19:50:37 +02:00
26b2d408e7 Merge pull request #167 from matt-hickford/master
Fix plugins ImportError
2013-09-23 02:13:14 -07:00
b5f180a5ee Fix plugins ImportError described at https://github.com/jkbr/httpie/issues/166#issuecomment-24905910 2013-09-23 09:54:06 +01:00
354aaa94bd Improved .netrc example formatting. 2013-09-22 15:20:50 +02:00
2ad4059f92 Improved .netrc example formatting. 2013-09-22 15:19:59 +02:00
5a6b65ecc6 Added link to httpie-oauth. 2013-09-22 15:10:50 +02:00
2acb303552 Added support for auth plugins. 2013-09-21 23:46:15 +02:00
f7b703b4bf Added --ignore-stdin
Closes #150
2013-08-23 10:57:17 +02:00
00de49f4c3 Cleanup 2013-08-18 00:59:10 +02:00
67496162fa Improved --help output. 2013-08-10 11:56:19 +02:00
8378ad3624 Try to import argparse before adding it to reqs. 2013-08-01 09:07:33 +02:00
f87884dd8d README 2013-08-01 08:46:37 +02:00
b671ee35e7 Merge pull request #153 from lorin/patch-1
Augment cookie example in README for multiple cookies
2013-07-31 07:52:22 -07:00
69247066dc Augment cookie example in README for multiple cookies
This change updates the README to show how to pass multiple cookies.
2013-07-31 10:29:38 -04:00
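One way to pass several cookies at once, roughly what the README addition documents (cookie names and values are invented):

    $ http example.org 'Cookie:sessionid=foo;another=bar'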
383dba524a Print error when download is interrupted by server
Close #147
2013-07-07 17:00:03 +02:00
60f09776a5 httpless outputs also response headers by default 2013-06-03 12:28:04 +02:00
48719aa70e README 2013-06-03 12:22:34 +02:00
809a461a26 v0.6.0 2013-06-03 12:19:43 +02:00
c3d550e930 Fixed headers tests; Require requests>=1.2.3. 2013-06-02 20:47:29 +02:00
172df162b3 Added XML formatting to CHANGELOG. 2013-06-02 20:27:58 +02:00
1bad62ab0e Handle unicode when formatting XML. 2013-06-02 20:25:36 +02:00
8d302f91f9 Merge branch 'master' of git://github.com/jargonjustin/httpie into jargonjustin-master 2013-06-02 20:14:51 +02:00
63b61bc811 Add custom Host example. 2013-05-20 15:31:02 +02:00
5af88756a6 Fixed download ETA for Python 2.6. 2013-05-14 12:49:29 +02:00
7f624e61b5 Use Thread instead of Timer for progress reporting. 2013-05-14 12:49:03 +02:00
6e848b3203 cleanup 2013-05-14 12:14:08 +02:00
8e112a6948 test_download_no_Content_Length 2013-05-13 15:35:12 +02:00
87c59ae561 Added anonymous sessions (--session=/file/path.json). 2013-05-13 14:47:44 +02:00
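Example of the path-based session form introduced here, using a made-up file path and header:

    $ http --session=/tmp/session.json example.org X-Api-Key:xyz
    $ http --session=/tmp/session.json example.org              # reuses the stored headers/cookies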
76eebeac2a 0.6.0-dev 2013-05-13 12:42:16 +02:00
5b9cbcb530 v0.5.1 2013-05-13 12:40:25 +02:00
8ad33d5f6a Changelog 2013-05-13 12:20:54 +02:00
86ac4cdb7b Changelog 2013-05-13 12:20:28 +02:00
e09b74021c Ignore Content-* and If-* request headers.
Those headers are not stored in sessions anymore.

Closes #141.
2013-05-13 11:54:49 +02:00
71e7061014 v0.5.0 2013-04-27 12:03:38 -03:00
bc756cb6a2 Cleanup 2013-04-27 11:57:13 -03:00
63ed4d32a7 Merge remote-tracking branch 'origin/master' 2013-04-17 13:52:02 -03:00
d1b91bfa9c Merge pull request #142 from capncodewash/netrc-example
Added example for .netrc usage (closes #139)
2013-04-17 09:46:50 -07:00
dac79a8efc Added example for .netrc usage (see issue #139 in upstream). 2013-04-17 16:32:55 +01:00
1fc8396c4b Stop the progress reporter thread on error. 2013-04-16 04:55:45 -03:00
6c3b983c18 Tests 2013-04-15 00:56:47 -03:00
cfa7199f0b Added a simple download test. 2013-04-13 15:34:31 -03:00
5a1177d57e Fixed downloads with no Content-Length. 2013-04-13 14:50:46 -03:00
c63a92f9b7 Cleanup 2013-04-12 22:02:34 -03:00
d17e02792b Fixed length progress bar. 2013-04-12 21:49:27 -03:00
fc4f70a900 Colorize stderr on Windows. 2013-04-12 17:15:21 -03:00
1681a4ddd0 TODOs 2013-04-12 15:27:26 -03:00
289e9b844e Fixed Content-Type retrieval for Python 3. 2013-04-12 14:07:21 -03:00
72cf7c2cb7 Fixed tests for Python 2.6. 2013-04-12 13:42:34 -03:00
4d84d77851 Cleanup 2013-04-12 13:09:57 -03:00
1b98505537 Validate download options before setting up streams. 2013-04-12 11:59:23 -03:00
d32acfe2fa Only use Range when already have a partial download. 2013-04-12 11:56:05 -03:00
e8d79c4d8c Docs fix. 2013-04-12 11:37:58 -03:00
38206e9e92 Cleanup 2013-04-12 11:26:42 -03:00
55d5e78324 --download docs (#104). 2013-04-12 11:06:03 -03:00
341272db1e Added support for output redirection with --download (#104). 2013-04-12 11:04:14 -03:00
464b7a36da Tests 2013-04-12 10:20:01 -03:00
9d043eb745 Used Content-Disposition filename (#104). 2013-04-12 10:19:49 -03:00
40bd8f65af Handle KeyboardInterrupt while --download'ing (#104). 2013-04-12 09:08:19 -03:00
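Typical --download usage touched by these commits (URL and file name are placeholders):

    $ http --download example.org/file.tar.gz
    $ http --download --output=file.tar.gz --continue example.org/file.tar.gz    # resume a partial download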
347653b369 Performance and progress bar improvements.
#104
2013-04-12 08:59:33 -03:00
ebfce6fb93 Improved progress bar (#104). 2013-04-11 18:51:21 -03:00
674acfe2c2 Cleanup 2013-04-11 16:23:15 -03:00
7ccdece39f Cleanup 2013-04-11 04:00:41 -03:00
e53dcba03e Added Content-Range parsing tests.
#104
2013-04-11 03:49:01 -03:00
486657afa3 Improved Content-Range parsing.
#104
2013-04-11 03:24:59 -03:00
599bc0519f Download resume improvements.
- Set correct Range
- Validate response status
- Validate Content-Range

 #104
2013-04-11 02:29:10 -03:00
21613faa5a Progress bar update 2013-04-10 13:07:05 -03:00
36bc64e02f Cleanup. 2013-04-10 12:53:25 -03:00
6e5c696ac9 --json with no data sets Content-Type as well
Closes #137
2013-04-02 11:07:14 -03:00
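After this fix, even a request without a body sent with --json carries the JSON Content-Type (host is illustrative):

    $ http --json POST example.org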
9b2a293e6e Progress on --download. 2013-03-24 11:23:18 -03:00
b0dd463687 Corrected session info in the README. 2013-03-22 16:26:51 -03:00
bffaee13ff Formatting 2013-03-20 12:07:23 -03:00
30afcea72d Merge pull request #135 from Scorpil/master
Fixed PyPy cookie updating issue

Closes #132
2013-03-20 08:05:23 -07:00
631c54b711 Fixed PyPy cookie updating issue 2013-03-20 11:45:56 +02:00
99f82bbd32 Handle downloads with no Content-Length. 2013-03-07 13:32:48 -03:00
6f64b437b7 Fixed streaming (closes #133) 2013-03-07 12:42:29 -03:00
7774eac3df Fixed unique suffix placement for URLs with a file extension. 2013-03-03 22:35:01 -03:00
8e6c765be2 Initial --download implementation (#104).
Closes #127
2013-03-03 22:17:09 -03:00
f0c42cd089 v0.4.1 2013-02-26 14:37:09 +01:00
5c6cea79a1 Removed a reference to the removed httpie command
Closes #131
2013-02-26 14:31:52 +01:00
2bed81059a Updated README. 2013-02-22 14:04:27 +01:00
be0b2f21d2 v0.4.0 2013-02-22 13:52:50 +01:00
d97a610f7c Added new logo by @claudiatd 2013-02-22 13:51:37 +01:00
5cc5b13555 Removed the management command.
It means that:

    httpie session list
    httpie session edit
    ...

are gone.

It has never been part of a stable release, and since it wasn't
a very useful feature, it's being removed now to avoid feature creep.
2013-02-22 13:27:26 +01:00
3043f24733 .gitignore 2013-02-22 13:19:18 +01:00
093dab5896 Multiple headers TODO. 2013-02-22 13:18:18 +01:00
5f42a21cfb Simplified stored session cookie data. 2013-01-22 20:03:28 +01:00
4c45f0d91f Session name escaping. 2013-01-22 20:02:39 +01:00
d7ec7b2217 Fixing tests for Travis. 2013-01-04 03:19:38 +01:00
7817dfbbcc Fixing tests for Travis. 2013-01-04 03:09:21 +01:00
238b2e0441 Fixing tests for Travis. 2013-01-04 03:05:36 +01:00
a93d57b58b Fixed request/response session cookies.
Closes #113.
2013-01-04 02:59:05 +01:00
79c412064a Python 3.3 fixes. 2013-01-03 15:19:21 +01:00
0ae9d7af58 Compatibility with requests v1.0.4 (requests URL params). 2013-01-03 14:42:17 +01:00
80e317fe24 Added Python 3.3 to tox and travis conf. 2013-01-03 14:14:22 +01:00
1481749c22 Use urlsplit instead of urlparse.
Closes #118.
2013-01-03 14:12:27 +01:00
d84d94dd55 Clean up 2013-01-03 13:49:41 +01:00
1913b0d438 Merge branch 'master' of github.com:jkbr/httpie 2012-12-19 12:31:34 +01:00
fe16f425a9 Require Requests v1.0.3. 2012-12-19 12:31:01 +01:00
7ff71a7f10 Revert: Test Python 3.3 on Travis.
3.3 still not supported
2012-12-19 11:56:02 +01:00
4a37d10245 Test Python 3.3 on Travis. 2012-12-19 11:53:26 +01:00
e5edb66ae8 Requests v1.0: Fixed request body access. 2012-12-19 11:37:52 +01:00
2e57c080fd Pretty print XML 2012-12-17 13:21:38 -08:00
1766dd8291 Requests 1.0: session cookies. 2012-12-17 17:18:18 +01:00
675a8b17ad Merge branch 'master' of github.com:jkbr/httpie 2012-12-17 17:14:24 +01:00
69e26b8bc8 Requests 1.0: prefetch; default_headers. 2012-12-17 17:02:27 +01:00
291f520e0c Update README.rst 2012-12-17 12:26:57 +01:00
9ec328ff6f Session commands. 2012-12-11 12:54:34 +01:00
f2d59ba6bd Improved --check-status + HTTP error + stdout redirect warning. 2012-12-05 05:27:11 +01:00
53caf6ae72 Cleanup 2012-12-05 05:06:06 +01:00
8175366f27 PEP8 2012-12-05 04:39:56 +01:00
8190a7c0c6 Fixed httpie session list 2012-12-05 04:36:42 +01:00
4a615e762f Updated session docs. 2012-12-01 18:43:33 +01:00
7426b4b493 RST formatting. 2012-12-01 18:26:15 +01:00
2cdcadd9d5 Added docs for httpie. 2012-12-01 18:25:34 +01:00
18510a9396 Progress on httpie session *. 2012-12-01 18:16:00 +01:00
acf5f063c7 Typo 2012-12-01 16:52:23 +01:00
2cf379df78 Fixed README typo. 2012-12-01 16:20:16 +01:00
dd100c2cc4 Fixed -j & -v & redirected stdout. Closes #109. 2012-12-01 15:55:58 +01:00
444a9fa929 Added httpless to README. 2012-12-01 15:54:36 +01:00
4a24cd25b9 Clean up. 2012-12-01 15:20:14 +01:00
1c5fb89001 Output stream refactoring. 2012-11-09 15:49:23 +01:00
466e1dbedf Updated CHANGELOG (#100). 2012-11-08 22:39:28 +01:00
d87b2aa0e5 Added support for credentials in URL.
Closes #100 🍰
2012-11-08 22:29:54 +01:00
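Basic-auth credentials can now be embedded directly in the URL, equivalent to passing them via --auth (user and password below are placeholders):

    $ http -v https://user:secret@example.org/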
5d969852c7 Added --no-option's and made args more config-friendly. 2012-09-24 06:49:12 +02:00
bbc702fa11 Improved README. 2012-09-24 05:59:52 +02:00
e25d64a610 0.3.0 2012-09-21 05:50:01 +02:00
66 changed files with 5480 additions and 2936 deletions

.gitignore (4 lines changed)

@@ -2,8 +2,10 @@ dist
httpie.egg-info
build
*.pyc
*.egg
.tox
README.html
.coverage
htmlcov
.idea
.DS_Store

.travis.yml

@@ -1,9 +1,17 @@
# https://travis-ci.org/jakubroztocil/httpie
os:
- linux
- osx
language: python
python:
- 2.6
- 2.7
- pypy
- 3.2
script: python setup.py test
install:
- pip install . --use-mirrors
- 3.3
- 3.4
- pypy3
script:
- make
after_success:
- pip install python-coveralls
- coveralls

AUTHORS.rst

@@ -2,12 +2,13 @@
HTTPie authors
==============
* `Jakub Roztocil <https://github.com/jkbr>`_
* `Jakub Roztocil <https://github.com/jakubroztocil>`_
Patches and ideas
-----------------
* `Cláudia T. Delgado <https://github.com/claudiatd>`_ (logo)
* `Hank Gay <https://github.com/gthank>`_
* `Jake Basile <https://github.com/jakebasile>`_
* `Vladimir Berkutov <https://github.com/dair-targ>`_
@@ -27,3 +28,7 @@ Patches and ideas
* `Tomek Wójcik <https://github.com/tomekwojcik>`_
* `Davey Shafik <https://github.com/dshafik>`_
* `cido <https://github.com/cido>`_
* `Justin Bonnar <https://github.com/jargonjustin>`_
* `Nathan LaFreniere <https://github.com/nlf>`_
* `Matthias Lehmann <https://github.com/matleh>`_
* `Dennis Brakhane <https://github.com/brakhane>`_

CONTRIBUTING.rst (new file, 95 lines)

@@ -0,0 +1,95 @@
Contributing to HTTPie
######################
Bug reports and code and documentation patches are greatly appretiated. You can
also help by using the development version of HTTPie and reporting any bugs you
might encounter.
Bug Reports
===========
**It's important that you provide the full command argument list
as well as the output of the failing command.**
Use the ``--debug`` flag and copy&paste both the command and its output
to your bug report, e.g.:
.. code-block:: bash
$ http --debug [arguments that trigger the error]
[complete output]
Contributing Code and Documentation
===================================
Before working on a new feature or a bug, please browse `existing issues`_
to see whether it has been previously discussed. If the change in question
is a bigger one, it's always good to discuss before your starting working on
it.
Development Environment
-----------------------
.. code-block:: bash
git clone https://github.com/<YOU>/httpie
cd httpie
git checkout -b my_topical_branch
# (Recommended: create a new virtualenv)
# Install dev. requirements and also HTTPie (in editable mode
# so that the `http' command will point to your working copy):
make
Making Changes
--------------
Please make sure your changes conform to `Style Guide for Python Code`_ (PEP8).
Tests
-----
Before opening a pull requests, please make sure the `test suite`_ passes
in all of the `supported Python environments`_. You should also **add tests
for any new features and bug fixes**.
HTTPie uses `pytest`_ and `Tox`_.
.. code-block:: bash
### Running all tests:
# Current Python
make test
# Current Python with coverage
make test-cover
# All the supported and available Pythons via Tox
make test-tox
### Running specific tests:
# Current Python
pytest tests/test_uploads.py
# All Pythons
tox -- tests/test_uploads.py --verbose
Don't forget to add yourself to `AUTHORS.rst`_.
.. _Tox: http://tox.testrun.org
.. _supported Python environments: https://github.com/jakubroztocil/httpie/blob/master/tox.ini
.. _existing issues: https://github.com/jakubroztocil/httpie/issues?state=open
.. _AUTHORS.rst: https://github.com/jakubroztocil/httpie/blob/master/AUTHORS.rst
.. _pytest: http://pytest.org/
.. _Style Guide for Python Code: http://python.org/dev/peps/pep-0008/
.. _test suite: https://github.com/jakubroztocil/httpie/tree/master/tests

Makefile (new file, 67 lines)

@@ -0,0 +1,67 @@
VERSION=$(shell grep __version__ httpie/__init__.py)
REQUIREMENTS="requirements-dev.txt"
TAG="\n\n\033[0;32m\#\#\# "
END=" \#\#\# \033[0m\n"
all: test
uninstall-httpie:
@echo $(TAG)Removing existing installation of HTTPie$(END)
- pip uninstall --yes httpie >/dev/null
! which http
@echo
uninstall-all: uninstall-httpie
- pip uninstall --yes -r $(REQUIREMENTS)
init: uninstall-httpie
@echo $(TAG)Installing dev requirements$(END)
pip install --upgrade -r $(REQUIREMENTS)
@echo $(TAG)Installing HTTPie$(END)
pip install --upgrade --editable .
@echo
test: init
@echo $(TAG)Running tests in on current Python with coverage $(END)
py.test --cov ./httpie --cov ./tests --doctest-modules --verbose ./httpie ./tests
@echo
test-tox: init
@echo $(TAG)Running tests on all Pythons via Tox$(END)
tox
@echo
test-dist: test-sdist test-bdist-wheel
@echo
test-sdist: clean uninstall-httpie
@echo $(TAG)Testing sdist build an installation$(END)
python setup.py sdist
pip install --force-reinstall --upgrade dist/*.gz
which http
@echo
test-bdist-wheel: clean uninstall-httpie
@echo $(TAG)Testing wheel build an installation$(END)
python setup.py bdist_wheel
pip install --force-reinstall --upgrade dist/*.whl
which http
@echo
# This tests everything, even this Makefile.
test-all: uninstall-all clean init test test-tox test-dist
publish: test-all
@echo $(TAG)Testing wheel build an installation$(END)
@echo "$(VERSION)"
@echo "$(VERSION)" | grep -q "dev" && echo "!!!Not publishing dev version!!!" && exit 1
python setup.py register
python setup.py sdist upload
python setup.py bdist_wheel upload
@echo
clean:
@echo $(TAG)Cleaning up$(END)
rm -rf .tox *.egg dist build .coverage
find . -name '__pycache__' -delete -print -o -name '*.pyc' -delete -print
@echo

File diff suppressed because it is too large.

appveyor.yml (new file, 17 lines)

@@ -0,0 +1,17 @@
# https://ci.appveyor.com/project/jakubroztocil/httpie
build: false
environment:
matrix:
- PYTHON: "C:/Python27"
- PYTHON: "C:/Python34"
init:
- "ECHO %PYTHON%"
- ps: "ls C:/Python*"
install:
- ps: (new-object net.webclient).DownloadFile('https://raw.github.com/pypa/pip/master/contrib/get-pip.py', 'C:/get-pip.py')
- "%PYTHON%/python.exe C:/get-pip.py"
- "%PYTHON%/Scripts/pip.exe install -e ."
test_script:
- "%PYTHON%/Scripts/pip.exe --version"
- "%PYTHON%/Scripts/http.exe --debug"
- "%PYTHON%/python.exe setup.py test"

httpie/__init__.py

@@ -1,13 +1,14 @@
"""
HTTPie - cURL for humans.
HTTPie - a CLI, cURL-like tool for humans.
"""
__author__ = 'Jakub Roztocil'
__version__ = '0.2.8-alpha'
__version__ = '0.9.0'
__licence__ = 'BSD'
class EXIT:
class ExitStatus:
"""Exit status code constants."""
OK = 0
ERROR = 1
ERROR_TIMEOUT = 2

httpie/cli.py

@@ -3,207 +3,291 @@
NOTE: the CLI interface may change before reaching v1.0.
"""
from argparse import FileType, OPTIONAL, ZERO_OR_MORE, SUPPRESS
from textwrap import dedent, wrap
#noinspection PyCompatibility
from argparse import (RawDescriptionHelpFormatter, FileType,
OPTIONAL, ZERO_OR_MORE, SUPPRESS)
from requests.compat import is_windows
from . import __doc__
from . import __version__
from .config import DEFAULT_CONFIG_DIR
from .output import AVAILABLE_STYLES, DEFAULT_STYLE
from .input import (Parser, AuthCredentialsArgType, KeyValueArgType,
SEP_PROXY, SEP_CREDENTIALS, SEP_GROUP_ITEMS,
OUT_REQ_HEAD, OUT_REQ_BODY, OUT_RESP_HEAD,
OUT_RESP_BODY, OUTPUT_OPTIONS,
PRETTY_MAP, PRETTY_STDOUT_TTY_ONLY)
from httpie import __doc__, __version__
from httpie.plugins.builtin import BuiltinAuthPlugin
from httpie.plugins import plugin_manager
from httpie.sessions import DEFAULT_SESSIONS_DIR
from httpie.output.formatters.colors import AVAILABLE_STYLES, DEFAULT_STYLE
from httpie.input import (Parser, AuthCredentialsArgType, KeyValueArgType,
SEP_PROXY, SEP_CREDENTIALS, SEP_GROUP_ALL_ITEMS,
OUT_REQ_HEAD, OUT_REQ_BODY, OUT_RESP_HEAD,
OUT_RESP_BODY, OUTPUT_OPTIONS,
OUTPUT_OPTIONS_DEFAULT, PRETTY_MAP,
PRETTY_STDOUT_TTY_ONLY, SessionNameValidator,
readable_file_arg)
def _(text):
"""Normalize whitespace."""
return ' '.join(text.strip().split())
class HTTPieHelpFormatter(RawDescriptionHelpFormatter):
"""A nicer help formatter.
Help for arguments can be indented and contain new lines.
It will be de-dented and arguments in the help
will be separated by a blank line for better readability.
"""
def __init__(self, max_help_position=6, *args, **kwargs):
# A smaller indent for args help.
kwargs['max_help_position'] = max_help_position
super(HTTPieHelpFormatter, self).__init__(*args, **kwargs)
def _split_lines(self, text, width):
text = dedent(text).strip() + '\n\n'
return text.splitlines()
parser = Parser(
formatter_class=HTTPieHelpFormatter,
description='%s <http://httpie.org>' % __doc__.strip(),
epilog=_('''
Suggestions and bug reports are greatly appreciated:
https://github.com/jkbr/httpie/issues
''')
epilog=dedent("""
For every --OPTION there is also a --no-OPTION that reverts OPTION
to its default value.
Suggestions and bug reports are greatly appreciated:
https://github.com/jakubroztocil/httpie/issues
""")
)
###############################################################################
#######################################################################
# Positional arguments.
###############################################################################
#######################################################################
positional = parser.add_argument_group(
title='Positional arguments',
description=_('''
These arguments come after any flags and in the
order they are listed here. Only URL is required.'''
)
title='Positional Arguments',
description=dedent("""
These arguments come after any flags and in the order they are listed here.
Only URL is required.
""")
)
positional.add_argument(
'method', metavar='METHOD',
'method',
metavar='METHOD',
nargs=OPTIONAL,
default=None,
help=_('''
The HTTP method to be used for the request
(GET, POST, PUT, DELETE, PATCH, ...).
If this argument is omitted, then HTTPie
will guess the HTTP method. If there is some
data to be sent, then it will be POST, otherwise GET.
''')
help="""
The HTTP method to be used for the request (GET, POST, PUT, DELETE, ...).
This argument can be omitted in which case HTTPie will use POST if there
is some data to be sent, otherwise GET:
$ http example.org # => GET
$ http example.org hello=world # => POST
"""
)
positional.add_argument(
'url', metavar='URL',
help=_('''
The protocol defaults to http:// if the
URL does not include one.
''')
'url',
metavar='URL',
help="""
The scheme defaults to 'http://' if the URL does not include one.
You can also use a shorthand for localhost
$ http :3000 # => http://localhost:3000
$ http :/foo # => http://localhost/foo
"""
)
positional.add_argument(
'items', metavar='REQUEST ITEM',
'items',
metavar='REQUEST_ITEM',
nargs=ZERO_OR_MORE,
type=KeyValueArgType(*SEP_GROUP_ITEMS),
help=_('''
A key-value pair whose type is defined by the
separator used. It can be an HTTP header (header:value),
a data field to be used in the request body (field_name=value),
a raw JSON data field (field_name:=value),
a query parameter (name==value),
or a file field (field_name@/path/to/file).
You can use a backslash to escape a colliding
separator in the field name.
''')
type=KeyValueArgType(*SEP_GROUP_ALL_ITEMS),
help=r"""
Optional key-value pairs to be included in the request. The separator used
determines the type:
':' HTTP headers:
Referer:http://httpie.org Cookie:foo=bar User-Agent:bacon/1.0
'==' URL parameters to be appended to the request URI:
search==httpie
'=' Data fields to be serialized into a JSON object (with --json, -j)
or form data (with --form, -f):
name=HTTPie language=Python description='CLI HTTP client'
':=' Non-string JSON data fields (only with --json, -j):
awesome:=true amount:=42 colors:='["red", "green", "blue"]'
'@' Form file fields (only with --form, -f):
cs@~/Documents/CV.pdf
'=@' A data field like '=', but takes a file path and embeds its content:
essay=@Documents/essay.txt
':=@' A raw JSON field like ':=', but takes a file path and embeds its content:
package:=@./package.json
You can use a backslash to escape a colliding separator in the field name:
field-name-with\:colon=value
"""
)
###############################################################################
#######################################################################
# Content type.
###############################################################################
#######################################################################
content_type = parser.add_argument_group(
title='Predefined content types',
title='Predefined Content Types',
description=None
).add_mutually_exclusive_group(required=False)
)
content_type.add_argument(
'--json', '-j', action='store_true',
help=_('''
(default) Data items from the command
line are serialized as a JSON object.
The Content-Type and Accept headers
are set to application/json (if not specified).
''')
'--json', '-j',
action='store_true',
help="""
(default) Data items from the command line are serialized as a JSON object.
The Content-Type and Accept headers are set to application/json
(if not specified).
"""
)
content_type.add_argument(
'--form', '-f', action='store_true',
help=_('''
Data items from the command line are serialized as form fields.
The Content-Type is set to application/x-www-form-urlencoded
(if not specified).
The presence of any file fields results
in a multipart/form-data request.
''')
'--form', '-f',
action='store_true',
help="""
Data items from the command line are serialized as form fields.
The Content-Type is set to application/x-www-form-urlencoded (if not
specified). The presence of any file fields results in a
multipart/form-data request.
"""
)
###############################################################################
#######################################################################
# Output processing
###############################################################################
#######################################################################
output_processing = parser.add_argument_group(title='Output processing')
output_processing = parser.add_argument_group(title='Output Processing')
output_processing.add_argument(
'--output', '-o', type=FileType('w+b'),
metavar='FILE',
help= SUPPRESS if not is_windows else _(
'''
Save output to FILE.
This option is a replacement for piping output to FILE,
which would on Windows result in corrupted data
being saved.
'--pretty',
dest='prettify',
default=PRETTY_STDOUT_TTY_ONLY,
choices=sorted(PRETTY_MAP.keys()),
help="""
Controls output processing. The value can be "none" to not prettify
the output (default for redirected output), "all" to apply both colors
and formatting (default for terminal output), "colors", or "format".
'''
"""
)
output_processing.add_argument(
'--style', '-s',
dest='style',
metavar='STYLE',
default=DEFAULT_STYLE,
choices=AVAILABLE_STYLES,
help="""
Output coloring style (default is "{default}"). One of:
{available}
For this option to work properly, please make sure that the $TERM
environment variable is set to "xterm-256color" or similar
(e.g., via `export TERM=xterm-256color' in your ~/.bashrc).
""".format(
default=DEFAULT_STYLE,
available='\n'.join(
'{0}{1}'.format(8*' ', line.strip())
for line in wrap(', '.join(sorted(AVAILABLE_STYLES)), 60)
).rstrip(),
)
)
output_processing.add_argument(
'--pretty', dest='prettify', default=PRETTY_STDOUT_TTY_ONLY,
choices=sorted(PRETTY_MAP.keys()),
help=_('''
Controls output processing. The value can be "none" to not prettify
the output (default for redirected output), "all" to apply both colors
and formatting
(default for terminal output), "colors", or "format".
''')
)
output_processing.add_argument(
'--style', '-s', dest='style', default=DEFAULT_STYLE, metavar='STYLE',
choices=AVAILABLE_STYLES,
help=_('''
Output coloring style. One of %s. Defaults to "%s".
For this option to work properly, please make sure that the
$TERM environment variable is set to "xterm-256color" or similar
(e.g., via `export TERM=xterm-256color' in your ~/.bashrc).
''') % (', '.join(sorted(AVAILABLE_STYLES)), DEFAULT_STYLE)
)
###############################################################################
#######################################################################
# Output options
###############################################################################
output_options = parser.add_argument_group(title='Output options')
#######################################################################
output_options = parser.add_argument_group(title='Output Options')
output_print = output_options.add_mutually_exclusive_group(required=False)
output_print.add_argument('--print', '-p', dest='output_options',
output_options.add_argument(
'--print', '-p',
dest='output_options',
metavar='WHAT',
help=_('''
String specifying what the output should contain:
"{request_headers}" stands for the request headers, and
"{request_body}" for the request body.
"{response_headers}" stands for the response headers and
"{response_body}" for response the body.
The default behaviour is "hb" (i.e., the response
headers and body is printed), if standard output is not redirected.
If the output is piped to another program or to a file,
then only the body is printed by default.
'''.format(
request_headers=OUT_REQ_HEAD,
request_body=OUT_REQ_BODY,
response_headers=OUT_RESP_HEAD,
response_body=OUT_RESP_BODY,
))
help="""
String specifying what the output should contain:
'{req_head}' request headers
'{req_body}' request body
'{res_head}' response headers
'{res_body}' response body
The default behaviour is '{default}' (i.e., the response headers and body
is printed), if standard output is not redirected. If the output is piped
to another program or to a file, then only the response body is printed
by default.
"""
.format(
req_head=OUT_REQ_HEAD,
req_body=OUT_REQ_BODY,
res_head=OUT_RESP_HEAD,
res_body=OUT_RESP_BODY,
default=OUTPUT_OPTIONS_DEFAULT,
)
)
output_print.add_argument(
'--verbose', '-v', dest='output_options',
action='store_const', const=''.join(OUTPUT_OPTIONS),
help=_('''
Print the whole request as well as the response.
Shortcut for --print={0}.
'''.format(''.join(OUTPUT_OPTIONS)))
output_options.add_argument(
'--verbose', '-v',
dest='output_options',
action='store_const',
const=''.join(OUTPUT_OPTIONS),
help="""
Print the whole request as well as the response. Shortcut for --print={0}.
"""
.format(''.join(OUTPUT_OPTIONS))
)
output_print.add_argument(
'--headers', '-h', dest='output_options',
action='store_const', const=OUT_RESP_HEAD,
help=_('''
Print only the response headers.
Shortcut for --print={0}.
'''.format(OUT_RESP_HEAD))
output_options.add_argument(
'--headers', '-h',
dest='output_options',
action='store_const',
const=OUT_RESP_HEAD,
help="""
Print only the response headers. Shortcut for --print={0}.
"""
.format(OUT_RESP_HEAD)
)
output_print.add_argument(
'--body', '-b', dest='output_options',
action='store_const', const=OUT_RESP_BODY,
help=_('''
Print only the response body.
Shortcut for --print={0}.
'''.format(OUT_RESP_BODY))
output_options.add_argument(
'--body', '-b',
dest='output_options',
action='store_const',
const=OUT_RESP_BODY,
help="""
Print only the response body. Shortcut for --print={0}.
"""
.format(OUT_RESP_BODY)
)
output_options.add_argument('--stream', '-S', action='store_true', default=False,
help=_('''
output_options.add_argument(
'--stream', '-S',
action='store_true',
default=False,
help="""
Always stream the output by line, i.e., behave like `tail -f'.
Without --stream and with --pretty (either set or implied),
@@ -215,138 +299,272 @@ output_options.add_argument('--stream', '-S', action='store_true', default=False
It is useful also without --pretty: It ensures that the output is flushed
more often and in smaller chunks.
'''
))
"""
)
output_options.add_argument(
'--output', '-o',
type=FileType('a+b'),
dest='output_file',
metavar='FILE',
help="""
Save output to FILE. If --download is set, then only the response body is
saved to the file. Other parts of the HTTP exchange are printed to stderr.
"""
)
output_options.add_argument(
'--download', '-d',
action='store_true',
default=False,
help="""
Do not print the response body to stdout. Rather, download it and store it
in a file. The filename is guessed unless specified with --output
[filename]. This action is similar to the default behaviour of wget.
"""
)
output_options.add_argument(
'--continue', '-c',
dest='download_resume',
action='store_true',
default=False,
help="""
Resume an interrupted download. Note that the --output option needs to be
specified as well.
"""
)
###############################################################################
#######################################################################
# Sessions
###############################################################################
#######################################################################
sessions = parser.add_argument_group(title='Sessions')\
.add_mutually_exclusive_group(required=False)
session_name_validator = SessionNameValidator(
'Session name contains invalid characters.'
)
sessions.add_argument(
'--session', metavar='SESSION_NAME',
help=_('''
Create, or reuse and update a session.
Withing a session, custom headers, auth credential, as well as any
cookies sent by the server persist between requests.
You can use the `httpie' management command to manipulate
and inspect existing sessions. See `httpie --help'.
''')
'--session',
metavar='SESSION_NAME_OR_PATH',
type=session_name_validator,
help="""
Create, or reuse and update a session. Within a session, custom headers,
auth credential, as well as any cookies sent by the server persist between
requests.
Session files are stored in:
{session_dir}/<HOST>/<SESSION_NAME>.json.
"""
.format(session_dir=DEFAULT_SESSIONS_DIR)
)
sessions.add_argument(
'--session-read-only', metavar='SESSION_NAME',
help=_('''
Create or reuse a session, but do not update it once saved.
''')
'--session-read-only',
metavar='SESSION_NAME_OR_PATH',
type=session_name_validator,
help="""
Create or read a session without updating it form the request/response
exchange.
"""
)
###############################################################################
#######################################################################
# Authentication
###############################################################################
#######################################################################
# ``requests.request`` keyword arguments.
auth = parser.add_argument_group(title='Authentication')
auth.add_argument(
'--auth', '-a', metavar='USER[:PASS]',
'--auth', '-a',
metavar='USER[:PASS]',
type=AuthCredentialsArgType(SEP_CREDENTIALS),
help=_('''
If only the username is provided (-a username),
HTTPie will prompt for the password.
'''),
help="""
If only the username is provided (-a username), HTTPie will prompt
for the password.
""",
)
_auth_plugins = plugin_manager.get_auth_plugins()
auth.add_argument(
'--auth-type', choices=['basic', 'digest'], default='basic',
help=_('''
The authentication mechanism to be used.
Defaults to "basic".
''')
'--auth-type',
choices=[plugin.auth_type for plugin in _auth_plugins],
default=_auth_plugins[0].auth_type,
help="""
The authentication mechanism to be used. Defaults to "{default}".
{types}
"""
.format(default=_auth_plugins[0].auth_type, types='\n '.join(
'"{type}": {name}{package}{description}'.format(
type=plugin.auth_type,
name=plugin.name,
package=(
'' if issubclass(plugin, BuiltinAuthPlugin)
else ' (provided by %s)' % plugin.package_name
),
description=(
'' if not plugin.description else
'\n ' + ('\n '.join(wrap(plugin.description)))
)
)
for plugin in _auth_plugins
)),
)
#######################################################################
# Network
#############################################
#######################################################################
network = parser.add_argument_group(title='Network')
network.add_argument(
'--proxy', default=[], action='append', metavar='PROTOCOL:HOST',
'--proxy',
default=[],
action='append',
metavar='PROTOCOL:PROXY_URL',
type=KeyValueArgType(SEP_PROXY),
help=_('''
String mapping protocol to the URL of the proxy
(e.g. http:foo.bar:3128). You can specify multiple
proxies with different protocols.
''')
help="""
String mapping protocol to the URL of the proxy
(e.g. http:http://foo.bar:3128). You can specify multiple proxies with
different protocols.
"""
)
network.add_argument(
'--follow', default=False, action='store_true',
help=_('''
Set this flag if full redirects are allowed
(e.g. re-POST-ing of data at new ``Location``)
''')
'--follow',
default=False,
action='store_true',
help="""
Set this flag if full redirects are allowed (e.g. re-POST-ing of data at
new Location).
"""
)
network.add_argument(
'--verify', default='yes',
help=_('''
Set to "no" to skip checking the host\'s SSL certificate.
You can also pass the path to a CA_BUNDLE
file for private certs. You can also set
the REQUESTS_CA_BUNDLE environment variable.
Defaults to "yes".
''')
'--verify',
default='yes',
help="""
Set to "no" to skip checking the host's SSL certificate. You can also pass
the path to a CA_BUNDLE file for private certs. You can also set the
REQUESTS_CA_BUNDLE environment variable. Defaults to "yes".
"""
)
network.add_argument(
'--timeout', type=float, default=30, metavar='SECONDS',
help=_('''
The connection timeout of the request in seconds.
The default value is 30 seconds.
''')
'--cert',
default=None,
type=readable_file_arg,
help="""
You can specify a local cert to use as client side SSL certificate.
This file may either contain both private key and certificate or you may
specify --cert-key separately.
"""
)
network.add_argument(
'--cert-key',
default=None,
type=readable_file_arg,
help="""
The private key to use with SSL. Only needed if --cert is given and the
certificate file does not contain the private key.
"""
)
network.add_argument(
'--timeout',
type=float,
default=30,
metavar='SECONDS',
help="""
The connection timeout of the request in seconds. The default value is
30 seconds.
"""
)
network.add_argument(
'--check-status', default=False, action='store_true',
help=_('''
By default, HTTPie exits with 0 when no network or other fatal
errors occur.
'--check-status',
default=False,
action='store_true',
help="""
By default, HTTPie exits with 0 when no network or other fatal errors
occur. This flag instructs HTTPie to also check the HTTP status code and
exit with an error if the status indicates one.
This flag instructs HTTPie to also check the HTTP status code and
exit with an error if the status indicates one.
When the server replies with a 4xx (Client Error) or 5xx (Server Error)
status code, HTTPie exits with 4 or 5 respectively. If the response is a
3xx (Redirect) and --follow hasn't been set, then the exit status is 3.
Also an error message is written to stderr if stdout is redirected.
When the server replies with a 4xx (Client Error) or 5xx
(Server Error) status code, HTTPie exits with 4 or 5 respectively.
If the response is a 3xx (Redirect) and --follow
hasn't been set, then the exit status is 3.
Also an error message is written to stderr if stdout is redirected.
''')
"""
)
###############################################################################
#######################################################################
# Troubleshooting
###############################################################################
#######################################################################
troubleshooting = parser.add_argument_group(title='Troubleshooting')
troubleshooting.add_argument(
'--ignore-stdin',
action='store_true',
default=False,
help="""
Do not attempt to read stdin.
"""
)
troubleshooting.add_argument(
'--help',
action='help', default=SUPPRESS,
help='Show this help message and exit'
)
troubleshooting.add_argument('--version', action='version', version=__version__)
troubleshooting.add_argument(
'--traceback', action='store_true', default=False,
help='Prints exception traceback should one occur.'
action='help',
default=SUPPRESS,
help="""
Show this help message and exit.
"""
)
troubleshooting.add_argument(
'--debug', action='store_true', default=False,
help=_('''
Prints exception traceback should one occur, and also other
information that is useful for debugging HTTPie itself and
for bug reports.
''')
'--version',
action='version',
version=__version__,
help="""
Show version and exit.
"""
)
troubleshooting.add_argument(
'--traceback',
action='store_true',
default=False,
help="""
Prints exception traceback should one occur.
"""
)
troubleshooting.add_argument(
'--debug',
action='store_true',
default=False,
help="""
Prints exception traceback should one occur, and also other information
that is useful for debugging HTTPie itself and for reporting bugs.
"""
)

httpie/client.py

@@ -3,11 +3,11 @@ import sys
from pprint import pformat
import requests
import requests.auth
from requests.defaults import defaults
from . import sessions
from . import __version__
from httpie import sessions
from httpie import __version__
from httpie.compat import str
from httpie.plugins import plugin_manager
FORM = 'application/x-www-form-urlencoded; charset=utf-8'
@@ -18,71 +18,106 @@ DEFAULT_UA = 'HTTPie/%s' % __version__
def get_response(args, config_dir):
"""Send the request and return a `request.Response`."""
requests_kwargs = get_requests_kwargs(args)
if args.debug:
sys.stderr.write(
'\n>>> requests.request(%s)\n\n' % pformat(requests_kwargs))
if not args.session and not args.session_read_only:
return requests.request(**requests_kwargs)
requests_kwargs = get_requests_kwargs(args)
if args.debug:
dump_request(requests_kwargs)
response = requests.request(**requests_kwargs)
else:
return sessions.get_response(
response = sessions.get_response(
args=args,
config_dir=config_dir,
name=args.session or args.session_read_only,
request_kwargs=requests_kwargs,
session_name=args.session or args.session_read_only,
read_only=bool(args.session_read_only),
)
return response
def get_requests_kwargs(args):
"""Translate our `args` into `requests.request` keyword arguments."""
base_headers = defaults['base_headers'].copy()
base_headers['User-Agent'] = DEFAULT_UA
def dump_request(kwargs):
sys.stderr.write('\n>>> requests.request(%s)\n\n'
% pformat(kwargs))
def encode_headers(headers):
# This allows for unicode headers which is non-standard but practical.
# See: https://github.com/jakubroztocil/httpie/issues/212
return dict(
(name, value.encode('utf8') if isinstance(value, str) else value)
for name, value in headers.items()
)
def get_default_headers(args):
default_headers = {
'User-Agent': DEFAULT_UA
}
auto_json = args.data and not args.form
# FIXME: Accept is set to JSON with `http url @./file.txt`.
if args.json or auto_json:
base_headers['Accept'] = 'application/json'
if args.data:
base_headers['Content-Type'] = JSON
if isinstance(args.data, dict):
# If not empty, serialize the data `dict` parsed from arguments.
# Otherwise set it to `None` avoid sending "{}".
args.data = json.dumps(args.data) if args.data else None
default_headers['Accept'] = 'application/json'
if args.json or (auto_json and args.data):
default_headers['Content-Type'] = JSON
elif args.form and not args.files:
# If sending files, `requests` will set
# the `Content-Type` for us.
base_headers['Content-Type'] = FORM
default_headers['Content-Type'] = FORM
return default_headers
def get_requests_kwargs(args, base_headers=None):
"""
Translate our `args` into `requests.request` keyword arguments.
"""
# Serialize JSON data, if needed.
data = args.data
auto_json = data and not args.form
if args.json or auto_json and isinstance(data, dict):
if data:
data = json.dumps(data)
else:
# We need to set data to an empty string to prevent requests
# from assigning an empty list to `response.request.data`.
data = ''
# Finalize headers.
headers = get_default_headers(args)
if base_headers:
headers.update(base_headers)
headers.update(args.headers)
headers = encode_headers(headers)
credentials = None
if args.auth:
credentials = {
'basic': requests.auth.HTTPBasicAuth,
'digest': requests.auth.HTTPDigestAuth,
}[args.auth_type](args.auth.key, args.auth.value)
auth_plugin = plugin_manager.get_auth_plugin(args.auth_type)()
credentials = auth_plugin.get_auth(args.auth.key, args.auth.value)
cert = None
if args.cert:
cert = args.cert
if args.cert_key:
cert = cert, args.cert_key
kwargs = {
'prefetch': False,
'stream': True,
'method': args.method.lower(),
'url': args.url,
'headers': args.headers,
'data': args.data,
'headers': headers,
'data': data,
'verify': {
'yes': True,
'no': False
}.get(args.verify, args.verify),
'cert': cert,
'timeout': args.timeout,
'auth': credentials,
'proxies': dict((p.key, p.value) for p in args.proxy),
'files': args.files,
'allow_redirects': args.follow,
'params': args.params,
'config': {
'base_headers': base_headers
}
}
return kwargs

158
httpie/compat.py Normal file
View File

@ -0,0 +1,158 @@
"""
Python 2.6, 2.7, and 3.x compatibility.
"""
# Borrow these from requests:
# noinspection PyUnresolvedReferences
from requests.compat import is_windows, bytes, str, is_py3, is_py26
try:
# noinspection PyUnresolvedReferences,PyCompatibility
from urllib.parse import urlsplit
except ImportError:
# noinspection PyUnresolvedReferences,PyCompatibility
from urlparse import urlsplit
try:
# noinspection PyCompatibility
from urllib.request import urlopen
except ImportError:
# noinspection PyCompatibility
from urllib2 import urlopen
try:
from collections import OrderedDict
except ImportError:
# Python 2.6 OrderedDict class, needed for headers, parameters, etc.
# <https://pypi.python.org/pypi/ordereddict/1.1>
# noinspection PyCompatibility
from UserDict import DictMixin
# noinspection PyShadowingBuiltins
class OrderedDict(dict, DictMixin):
# Copyright (c) 2009 Raymond Hettinger
#
# Permission is hereby granted, free of charge, to any person
# obtaining a copy of this software and associated documentation files
# (the "Software"), to deal in the Software without restriction,
# including without limitation the rights to use, copy, modify, merge,
# publish, distribute, sublicense, and/or sell copies of the Software,
# and to permit persons to whom the Software is furnished to do so,
# subject to the following conditions:
#
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
# OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
# HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
# OTHER DEALINGS IN THE SOFTWARE.
# noinspection PyMissingConstructor
def __init__(self, *args, **kwds):
if len(args) > 1:
raise TypeError('expected at most 1 arguments, got %d'
% len(args))
try:
self.__end
except AttributeError:
self.clear()
self.update(*args, **kwds)
def clear(self):
self.__end = end = []
# noinspection PyUnusedLocal
end += [None, end, end] # sentinel node for doubly linked list
self.__map = {} # key --> [key, prev, next]
dict.clear(self)
def __setitem__(self, key, value):
if key not in self:
end = self.__end
curr = end[1]
curr[2] = end[1] = self.__map[key] = [key, curr, end]
dict.__setitem__(self, key, value)
def __delitem__(self, key):
dict.__delitem__(self, key)
key, prev, next = self.__map.pop(key)
prev[2] = next
next[1] = prev
def __iter__(self):
end = self.__end
curr = end[2]
while curr is not end:
yield curr[0]
curr = curr[2]
def __reversed__(self):
end = self.__end
curr = end[1]
while curr is not end:
yield curr[0]
curr = curr[1]
def popitem(self, last=True):
if not self:
raise KeyError('dictionary is empty')
if last:
key = reversed(self).next()
else:
key = iter(self).next()
value = self.pop(key)
return key, value
def __reduce__(self):
items = [[k, self[k]] for k in self]
tmp = self.__map, self.__end
del self.__map, self.__end
inst_dict = vars(self).copy()
self.__map, self.__end = tmp
if inst_dict:
return self.__class__, (items,), inst_dict
return self.__class__, (items,)
def keys(self):
return list(self)
setdefault = DictMixin.setdefault
update = DictMixin.update
pop = DictMixin.pop
values = DictMixin.values
items = DictMixin.items
iterkeys = DictMixin.iterkeys
itervalues = DictMixin.itervalues
iteritems = DictMixin.iteritems
def __repr__(self):
if not self:
return '%s()' % (self.__class__.__name__,)
return '%s(%r)' % (self.__class__.__name__, self.items())
def copy(self):
return self.__class__(self)
# noinspection PyMethodOverriding
@classmethod
def fromkeys(cls, iterable, value=None):
d = cls()
for key in iterable:
d[key] = value
return d
def __eq__(self, other):
if isinstance(other, OrderedDict):
if len(self) != len(other):
return False
for p, q in zip(self.items(), other.items()):
if p != q:
return False
return True
return dict.__eq__(self, other)
def __ne__(self, other):
return not self == other

View File

@ -2,8 +2,8 @@ import os
import json
import errno
from . import __version__
from requests.compat import is_windows
from httpie import __version__
from httpie.compat import is_windows
DEFAULT_CONFIG_DIR = os.environ.get(
@ -16,29 +16,29 @@ DEFAULT_CONFIG_DIR = os.environ.get(
class BaseConfigDict(dict):
name = None
help = None
directory=DEFAULT_CONFIG_DIR
def __init__(self, directory=None, *args, **kwargs):
super(BaseConfigDict, self).__init__(*args, **kwargs)
if directory:
self.directory = directory
helpurl = None
about = None
def __getattr__(self, item):
return self[item]
def _get_path(self):
"""Return the config file path without side-effects."""
raise NotImplementedError()
@property
def path(self):
"""Return the config file path creating basedir, if needed."""
path = self._get_path()
try:
os.makedirs(self.directory, mode=0o700)
os.makedirs(os.path.dirname(path), mode=0o700)
except OSError as e:
if e.errno != errno.EEXIST:
raise
return os.path.join(self.directory, self.name + '.json')
return path
@property
def is_new(self):
return not os.path.exists(self.path)
return not os.path.exists(self._get_path())
def load(self):
try:
@ -48,7 +48,7 @@ class BaseConfigDict(dict):
except ValueError as e:
raise ValueError(
'Invalid %s JSON: %s [%s]' %
(type(self).__name__, e.message, self.path)
(type(self).__name__, str(e), self.path)
)
self.update(data)
except IOError as e:
@ -56,7 +56,15 @@ class BaseConfigDict(dict):
raise
def save(self):
self['__version__'] = __version__
self['__meta__'] = {
'httpie': __version__
}
if self.helpurl:
self['__meta__']['help'] = self.helpurl
if self.about:
self['__meta__']['about'] = self.about
with open(self.path, 'w') as f:
json.dump(self, f, indent=4, sort_keys=True, ensure_ascii=True)
f.write('\n')
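Roughly what save() ends up writing for the Config subclass defined below (version and values illustrative; the default directory is typically ~/.httpie):
{
    "__meta__": {
        "about": "HTTPie configuration file",
        "help": "https://github.com/jakubroztocil/httpie#config",
        "httpie": "0.9.0"
    },
    "default_options": [],
    "implicit_content_type": "json"
}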
@ -72,12 +80,18 @@ class BaseConfigDict(dict):
class Config(BaseConfigDict):
name = 'config'
helpurl = 'https://github.com/jakubroztocil/httpie#config'
about = 'HTTPie configuration file'
DEFAULTS = {
'implicit_content_type': 'json',
'default_options': []
}
def __init__(self, *args, **kwargs):
super(Config, self).__init__(*args, **kwargs)
def __init__(self, directory=DEFAULT_CONFIG_DIR):
super(Config, self).__init__()
self.update(self.DEFAULTS)
self.directory = directory
def _get_path(self):
return os.path.join(self.directory, self.name + '.json')

85
httpie/context.py Normal file
View File

@ -0,0 +1,85 @@
import sys
from requests.compat import is_windows
from httpie.config import DEFAULT_CONFIG_DIR, Config
class Environment(object):
"""
Information about the execution context
(standard streams, config directory, etc).
By default, it represents the actual environment.
All of the attributes can be overwritten though, which
is used by the test suite to simulate various scenarios.
"""
is_windows = is_windows
config_dir = DEFAULT_CONFIG_DIR
stdin = sys.stdin
stdin_isatty = stdin.isatty()
stdin_encoding = None
stdout = sys.stdout
stdout_isatty = stdout.isatty()
stdout_encoding = None
stderr = sys.stderr
stderr_isatty = stderr.isatty()
colors = 256
if not is_windows:
import curses
try:
curses.setupterm()
try:
colors = curses.tigetnum('colors')
except TypeError:
# pypy3 (2.4.0)
colors = curses.tigetnum(b'colors')
except curses.error:
pass
del curses
else:
# noinspection PyUnresolvedReferences
import colorama.initialise
stdout = colorama.initialise.wrap_stream(
stdout, convert=None, strip=None,
autoreset=True, wrap=True
)
stderr = colorama.initialise.wrap_stream(
stderr, convert=None, strip=None,
autoreset=True, wrap=True
)
del colorama
def __init__(self, **kwargs):
"""
Use keyword arguments to overwrite
any of the class attributes for this instance.
"""
assert all(hasattr(type(self), attr) for attr in kwargs.keys())
self.__dict__.update(**kwargs)
# Keyword arguments > stream.encoding > default utf8
if self.stdin_encoding is None:
self.stdin_encoding = getattr(
self.stdin, 'encoding', None) or 'utf8'
if self.stdout_encoding is None:
actual_stdout = self.stdout
if is_windows:
# noinspection PyUnresolvedReferences
from colorama import AnsiToWin32
if isinstance(self.stdout, AnsiToWin32):
actual_stdout = self.stdout.wrapped
self.stdout_encoding = getattr(
actual_stdout, 'encoding', None) or 'utf8'
@property
def config(self):
if not hasattr(self, '_config'):
self._config = Config(directory=self.config_dir)
if self._config.is_new():
self._config.save()
else:
self._config.load()
return self._config
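A small usage sketch of the attribute-override mechanism (values hypothetical), e.g. to simulate a redirected, colorless run in tests:
env = Environment(stdout_isatty=False, colors=0)
assert env.stdout_isatty is False
assert env.config_dir == DEFAULT_CONFIG_DIR  # untouched attributes keep their class defaults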

View File

@ -2,47 +2,50 @@
Invocation flow:
1. Read, validate and process the input (args, `stdin`).
2. Create and send a request.
3. Stream, and possibly process and format, the requested parts
of the request-response exchange.
4. Simultaneously write to `stdout`
5. Exit.
1. Read, validate and process the input (args, `stdin`).
2. Create and send a request.
3. Stream, and possibly process and format, the parts
of the request-response exchange selected by output options.
4. Simultaneously write to `stdout`
5. Exit.
"""
import sys
import errno
import requests
from requests.compat import str, is_py3
from httpie import __version__ as httpie_version
from requests import __version__ as requests_version
from pygments import __version__ as pygments_version
from .cli import parser
from .client import get_response
from .models import Environment
from .output import output_stream, write, write_with_colors_win_p3k
from . import EXIT
from httpie import __version__ as httpie_version, ExitStatus
from httpie.compat import str, bytes, is_py3
from httpie.client import get_response
from httpie.downloads import Download
from httpie.context import Environment
from httpie.plugins import plugin_manager
from httpie.output.streams import (
build_output_stream,
write, write_with_colors_win_py3
)
def get_exist_status(code, follow=False):
"""Translate HTTP status code to exit status."""
if 300 <= code <= 399 and not follow:
def get_exit_status(http_status, follow=False):
"""Translate HTTP status code to exit status code."""
if 300 <= http_status <= 399 and not follow:
# Redirect
return EXIT.ERROR_HTTP_3XX
elif 400 <= code <= 499:
return ExitStatus.ERROR_HTTP_3XX
elif 400 <= http_status <= 499:
# Client Error
return EXIT.ERROR_HTTP_4XX
elif 500 <= code <= 599:
return ExitStatus.ERROR_HTTP_4XX
elif 500 <= http_status <= 599:
# Server Error
return EXIT.ERROR_HTTP_5XX
return ExitStatus.ERROR_HTTP_5XX
else:
return EXIT.OK
return ExitStatus.OK
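The mapping in a nutshell (status codes illustrative):
assert get_exit_status(200) == ExitStatus.OK
assert get_exit_status(302) == ExitStatus.ERROR_HTTP_3XX   # redirect without --follow
assert get_exit_status(302, follow=True) == ExitStatus.OK
assert get_exit_status(404) == ExitStatus.ERROR_HTTP_4XX
assert get_exit_status(500) == ExitStatus.ERROR_HTTP_5XX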
def print_debug_info(env):
sys.stderr.writelines([
env.stderr.writelines([
'HTTPie %s\n' % httpie_version,
'HTTPie data: %s\n' % env.config.directory,
'Requests %s\n' % requests_version,
@ -51,73 +54,140 @@ def print_debug_info(env):
])
def decode_args(args, stdin_encoding):
"""
Convert all bytes args to str
by decoding them using stdin encoding.
"""
return [
arg.decode(stdin_encoding)
if type(arg) == bytes else arg
for arg in args
]
def main(args=sys.argv[1:], env=Environment()):
"""Run the main program and write the output to ``env.stdout``.
Return exit status.
Return exit status code.
"""
args = decode_args(args, env.stdin_encoding)
plugin_manager.load_installed_plugins()
from httpie.cli import parser
if env.config.default_options:
args = env.config.default_options + args
def error(msg, *args):
def error(msg, *args, **kwargs):
msg = msg % args
env.stderr.write('\nhttp: error: %s\n' % msg)
level = kwargs.get('level', 'error')
env.stderr.write('\nhttp: %s: %s\n' % (level, msg))
debug = '--debug' in args
traceback = debug or '--traceback' in args
status = EXIT.OK
exit_status = ExitStatus.OK
if debug:
print_debug_info(env)
if args == ['--debug']:
sys.exit(EXIT.OK)
return exit_status
download = None
try:
args = parser.parse_args(args=args, env=env)
if args.download:
args.follow = True # --download implies --follow.
download = Download(
output_file=args.output_file,
progress_file=env.stderr,
resume=args.download_resume
)
download.pre_request(args.headers)
response = get_response(args, config_dir=env.config.directory)
if args.check_status:
status = get_exist_status(response.status_code,
args.follow)
if status and not env.stdout_isatty:
error('%s %s', response.raw.status, response.raw.reason)
if args.check_status or download:
stream = output_stream(args, env, response.request, response)
exit_status = get_exit_status(
http_status=response.status_code,
follow=args.follow
)
if not env.stdout_isatty and exit_status != ExitStatus.OK:
error('HTTP %s %s',
response.raw.status,
response.raw.reason,
level='warning')
write_kwargs = {
'stream': stream,
'stream': build_output_stream(
args, env, response.request, response),
# This will in fact be `stderr` with `--download`
'outfile': env.stdout,
'flush': env.stdout_isatty or args.stream
}
try:
if env.is_windows and is_py3 and 'colors' in args.prettify:
write_with_colors_win_p3k(**write_kwargs)
write_with_colors_win_py3(**write_kwargs)
else:
write(**write_kwargs)
if download and exit_status == ExitStatus.OK:
# Response body download.
download_stream, download_to = download.start(response)
write(
stream=download_stream,
outfile=download_to,
flush=False,
)
download.finish()
if download.interrupted:
exit_status = ExitStatus.ERROR
error('Incomplete download: size=%d; downloaded=%d' % (
download.status.total_size,
download.status.downloaded
))
except IOError as e:
if not traceback and e.errno == errno.EPIPE:
# Ignore broken pipes unless --traceback.
env.stderr.write('\n')
else:
raise
except (KeyboardInterrupt, SystemExit):
except KeyboardInterrupt:
if traceback:
raise
env.stderr.write('\n')
status = EXIT.ERROR
exit_status = ExitStatus.ERROR
except SystemExit as e:
if e.code != ExitStatus.OK:
if traceback:
raise
env.stderr.write('\n')
exit_status = ExitStatus.ERROR
except requests.Timeout:
status = EXIT.ERROR_TIMEOUT
exit_status = ExitStatus.ERROR_TIMEOUT
error('Request timed out (%ss).', args.timeout)
except Exception as e:
# TODO: distinguish between expected and unexpected errors.
# network errors vs. bugs, etc.
# TODO: Better distinction between expected and unexpected errors.
# Network errors vs. bugs, etc.
if traceback:
raise
error('%s: %s', type(e).__name__, str(e))
status = EXIT.ERROR
exit_status = ExitStatus.ERROR
return status
finally:
if download and not download.finished:
download.failed()
return exit_status

434
httpie/downloads.py Normal file
View File

@ -0,0 +1,434 @@
# coding=utf-8
"""
Download mode implementation.
"""
from __future__ import division
import os
import re
import sys
import mimetypes
import threading
from time import sleep, time
from mailbox import Message
from httpie.output.streams import RawStream
from httpie.models import HTTPResponse
from httpie.utils import humanize_bytes
from httpie.compat import urlsplit
PARTIAL_CONTENT = 206
CLEAR_LINE = '\r\033[K'
PROGRESS = (
'{percentage: 6.2f} %'
' {downloaded: >10}'
' {speed: >10}/s'
' {eta: >8} ETA'
)
PROGRESS_NO_CONTENT_LENGTH = '{downloaded: >10} {speed: >10}/s'
SUMMARY = 'Done. {downloaded} in {time:0.5f}s ({speed}/s)\n'
SPINNER = '|/-\\'
class ContentRangeError(ValueError):
pass
def parse_content_range(content_range, resumed_from):
"""
Parse and validate Content-Range header.
<http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html>
:param content_range: the value of a Content-Range response header
eg. "bytes 21010-47021/47022"
:param resumed_from: first byte pos. from the Range request header
:return: total size of the response body when fully downloaded.
"""
if content_range is None:
raise ContentRangeError('Missing Content-Range')
pattern = (
'^bytes (?P<first_byte_pos>\d+)-(?P<last_byte_pos>\d+)'
'/(\*|(?P<instance_length>\d+))$'
)
match = re.match(pattern, content_range)
if not match:
raise ContentRangeError(
'Invalid Content-Range format %r' % content_range)
content_range_dict = match.groupdict()
first_byte_pos = int(content_range_dict['first_byte_pos'])
last_byte_pos = int(content_range_dict['last_byte_pos'])
instance_length = (
int(content_range_dict['instance_length'])
if content_range_dict['instance_length']
else None
)
# "A byte-content-range-spec with a byte-range-resp-spec whose
# last- byte-pos value is less than its first-byte-pos value,
# or whose instance-length value is less than or equal to its
# last-byte-pos value, is invalid. The recipient of an invalid
# byte-content-range- spec MUST ignore it and any content
# transferred along with it."
if (first_byte_pos >= last_byte_pos
or (instance_length is not None
and instance_length <= last_byte_pos)):
raise ContentRangeError(
'Invalid Content-Range returned: %r' % content_range)
if (first_byte_pos != resumed_from
or (instance_length is not None
and last_byte_pos + 1 != instance_length)):
# Not what we asked for.
raise ContentRangeError(
'Unexpected Content-Range returned (%r)'
' for the requested Range ("bytes=%d-")'
% (content_range, resumed_from)
)
return last_byte_pos + 1
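Using the docstring's example values (illustrative):
parse_content_range('bytes 21010-47021/47022', resumed_from=21010)
# => 47022  -- the size of the complete representation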
def filename_from_content_disposition(content_disposition):
"""
Extract and validate filename from a Content-Disposition header.
:param content_disposition: Content-Disposition value
:return: the filename if present and valid, otherwise `None`
"""
# attachment; filename=jakubroztocil-httpie-0.4.1-20-g40bd8f6.tar.gz
msg = Message('Content-Disposition: %s' % content_disposition)
filename = msg.get_filename()
if filename:
# Basic sanitation.
filename = os.path.basename(filename).lstrip('.').strip()
if filename:
return filename
def filename_from_url(url, content_type):
fn = urlsplit(url).path.rstrip('/')
fn = os.path.basename(fn) if fn else 'index'
if '.' not in fn and content_type:
content_type = content_type.split(';')[0]
if content_type == 'text/plain':
# mimetypes returns '.ksh'
ext = '.txt'
else:
ext = mimetypes.guess_extension(content_type)
if ext == '.htm': # Python 3
ext = '.html'
if ext:
fn += ext
return fn
def get_unique_filename(filename, exists=os.path.exists):
attempt = 0
while True:
suffix = '-' + str(attempt) if attempt > 0 else ''
if not exists(filename + suffix):
return filename + suffix
attempt += 1
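Illustrative behaviour of the two filename helpers above (URL, content type, and existing files are hypothetical):
filename_from_url('http://example.org/report', content_type='text/plain')
# => 'report.txt'
get_unique_filename('report.txt', exists=lambda name: name == 'report.txt')
# => 'report.txt-1'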
class Download(object):
def __init__(self, output_file=None,
resume=False, progress_file=sys.stderr):
"""
:param resume: Should the download resume if partial download
already exists.
:type resume: bool
:param output_file: The file to store response body in. If not
provided, it will be guessed from the response.
:param progress_file: Where to report download progress.
"""
self._output_file = output_file
self._resume = resume
self._resumed_from = 0
self.finished = False
self.status = Status()
self._progress_reporter = ProgressReporterThread(
status=self.status,
output=progress_file
)
def pre_request(self, request_headers):
"""Called just before the HTTP request is sent.
Might alter `request_headers`.
:type request_headers: dict
"""
# Disable content encoding so that we can resume, etc.
request_headers['Accept-Encoding'] = None
if self._resume:
bytes_have = os.path.getsize(self._output_file.name)
if bytes_have:
# Set ``Range`` header to resume the download
# TODO: Use "If-Range: mtime" to make sure it's fresh?
request_headers['Range'] = 'bytes=%d-' % bytes_have
self._resumed_from = bytes_have
def start(self, response):
"""
Initiate and return a stream for `response` body with progress
callback attached. Can be called only once.
:param response: Initiated response object with headers already fetched
:type response: requests.models.Response
:return: RawStream, output_file
"""
assert not self.status.time_started
try:
total_size = int(response.headers['Content-Length'])
except (KeyError, ValueError, TypeError):
total_size = None
if self._output_file:
if self._resume and response.status_code == PARTIAL_CONTENT:
total_size = parse_content_range(
response.headers.get('Content-Range'),
self._resumed_from
)
else:
self._resumed_from = 0
try:
self._output_file.seek(0)
self._output_file.truncate()
except IOError:
pass # stdout
else:
# TODO: Should the filename be taken from response.history[0].url?
# Output file not specified. Pick a name that doesn't exist yet.
filename = None
if 'Content-Disposition' in response.headers:
filename = filename_from_content_disposition(
response.headers['Content-Disposition'])
if not filename:
filename = filename_from_url(
url=response.url,
content_type=response.headers.get('Content-Type'),
)
self._output_file = open(get_unique_filename(filename), mode='a+b')
self.status.started(
resumed_from=self._resumed_from,
total_size=total_size
)
stream = RawStream(
msg=HTTPResponse(response),
with_headers=False,
with_body=True,
on_body_chunk_downloaded=self.chunk_downloaded,
chunk_size=1024 * 8
)
self._progress_reporter.output.write(
'Downloading %sto "%s"\n' % (
(humanize_bytes(total_size) + ' '
if total_size is not None
else ''),
self._output_file.name
)
)
self._progress_reporter.start()
return stream, self._output_file
def finish(self):
assert not self.finished
self.finished = True
self.status.finished()
def failed(self):
self._progress_reporter.stop()
@property
def interrupted(self):
return (
self.finished
and self.status.total_size
and self.status.total_size != self.status.downloaded
)
def chunk_downloaded(self, chunk):
"""
A download progress callback.
:param chunk: A chunk of response body data that has just
been downloaded and written to the output.
:type chunk: bytes
"""
self.status.chunk_downloaded(len(chunk))
class Status(object):
"""Holds details about the downland status."""
def __init__(self):
self.downloaded = 0
self.total_size = None
self.resumed_from = 0
self.time_started = None
self.time_finished = None
def started(self, resumed_from=0, total_size=None):
assert self.time_started is None
if total_size is not None:
self.total_size = total_size
self.downloaded = self.resumed_from = resumed_from
self.time_started = time()
def chunk_downloaded(self, size):
assert self.time_finished is None
self.downloaded += size
@property
def has_finished(self):
return self.time_finished is not None
def finished(self):
assert self.time_started is not None
assert self.time_finished is None
self.time_finished = time()
class ProgressReporterThread(threading.Thread):
"""
Reports download progress based on its status.
Uses threading to periodically update the status (speed, ETA, etc.).
"""
def __init__(self, status, output, tick=.1, update_interval=1):
"""
:type status: Status
:type output: file
"""
super(ProgressReporterThread, self).__init__()
self.status = status
self.output = output
self._tick = tick
self._update_interval = update_interval
self._spinner_pos = 0
self._status_line = ''
self._prev_bytes = 0
self._prev_time = time()
self._should_stop = threading.Event()
def stop(self):
"""Stop reporting on next tick."""
self._should_stop.set()
def run(self):
while not self._should_stop.is_set():
if self.status.has_finished:
self.sum_up()
break
self.report_speed()
sleep(self._tick)
def report_speed(self):
now = time()
if now - self._prev_time >= self._update_interval:
downloaded = self.status.downloaded
try:
speed = ((downloaded - self._prev_bytes)
/ (now - self._prev_time))
except ZeroDivisionError:
speed = 0
if not self.status.total_size:
self._status_line = PROGRESS_NO_CONTENT_LENGTH.format(
downloaded=humanize_bytes(downloaded),
speed=humanize_bytes(speed),
)
else:
try:
percentage = downloaded / self.status.total_size * 100
except ZeroDivisionError:
percentage = 0
if not speed:
eta = '-:--:--'
else:
s = int((self.status.total_size - downloaded) / speed)
h, s = divmod(s, 60 * 60)
m, s = divmod(s, 60)
eta = '{0}:{1:0>2}:{2:0>2}'.format(h, m, s)
self._status_line = PROGRESS.format(
percentage=percentage,
downloaded=humanize_bytes(downloaded),
speed=humanize_bytes(speed),
eta=eta,
)
self._prev_time = now
self._prev_bytes = downloaded
self.output.write(
CLEAR_LINE
+ ' '
+ SPINNER[self._spinner_pos]
+ ' '
+ self._status_line
)
self.output.flush()
self._spinner_pos = (self._spinner_pos + 1
if self._spinner_pos + 1 != len(SPINNER)
else 0)
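A worked example of the ETA arithmetic above (numbers hypothetical):
# total_size = 10 MiB, downloaded = 2 MiB, speed = 512 KiB/s
#   s = int((10 * 2**20 - 2 * 2**20) / (512 * 2**10)) = 16
#   divmod(16, 3600) -> (0, 16); divmod(16, 60) -> (0, 16)
#   eta = '0:00:16'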
def sum_up(self):
actually_downloaded = (self.status.downloaded
- self.status.resumed_from)
time_taken = self.status.time_finished - self.status.time_started
self.output.write(CLEAR_LINE)
try:
speed = actually_downloaded / time_taken
except ZeroDivisionError:
# Either time is 0 (not all systems provide `time.time`
# with a better precision than 1 second), and/or nothing
# has been downloaded.
speed = actually_downloaded
self.output.write(SUMMARY.format(
downloaded=humanize_bytes(actually_downloaded),
total=(self.status.total_size
and humanize_bytes(self.status.total_size)),
speed=humanize_bytes(speed),
time=time_taken,
))
self.output.flush()

View File

@ -4,19 +4,21 @@
import os
import sys
import re
import json
import errno
import mimetypes
import getpass
from io import BytesIO
from argparse import ArgumentParser, ArgumentTypeError
try:
from collections import OrderedDict
except ImportError:
OrderedDict = dict
from collections import namedtuple
# noinspection PyCompatibility
from argparse import ArgumentParser, ArgumentTypeError, ArgumentError
# TODO: Use MultiDict for headers once added to `requests`.
# https://github.com/jakubroztocil/httpie/issues/130
from requests.structures import CaseInsensitiveDict
from requests.compat import str, urlparse
from httpie.compat import OrderedDict, urlsplit, str
from httpie.sessions import VALID_SESSION_NAME_PATTERN
from httpie.utils import load_json_preserve_order
HTTP_POST = 'POST'
@ -32,22 +34,40 @@ SEP_PROXY = ':'
SEP_DATA = '='
SEP_DATA_RAW_JSON = ':='
SEP_FILES = '@'
SEP_DATA_EMBED_FILE = '=@'
SEP_DATA_EMBED_RAW_JSON_FILE = ':=@'
SEP_QUERY = '=='
# Separators that become request data
SEP_GROUP_DATA_ITEMS = frozenset([
SEP_DATA,
SEP_DATA_RAW_JSON,
SEP_FILES
SEP_FILES,
SEP_DATA_EMBED_FILE,
SEP_DATA_EMBED_RAW_JSON_FILE
])
# Separators for items whose value is a filename to be embedded
SEP_GROUP_DATA_EMBED_ITEMS = frozenset([
SEP_DATA_EMBED_FILE,
SEP_DATA_EMBED_RAW_JSON_FILE,
])
# Separators for raw JSON items
SEP_GROUP_RAW_JSON_ITEMS = frozenset([
SEP_DATA_RAW_JSON,
SEP_DATA_EMBED_RAW_JSON_FILE,
])
# Separators allowed in ITEM arguments
SEP_GROUP_ITEMS = frozenset([
SEP_GROUP_ALL_ITEMS = frozenset([
SEP_HEADERS,
SEP_QUERY,
SEP_DATA,
SEP_DATA_RAW_JSON,
SEP_FILES
SEP_FILES,
SEP_DATA_EMBED_FILE,
SEP_DATA_EMBED_RAW_JSON_FILE,
])
@ -91,41 +111,46 @@ class Parser(ArgumentParser):
kwargs['add_help'] = False
super(Parser, self).__init__(*args, **kwargs)
#noinspection PyMethodOverriding
# noinspection PyMethodOverriding
def parse_args(self, env, args=None, namespace=None):
self.env = env
self.args, no_options = super(Parser, self)\
.parse_known_args(args, namespace)
args = super(Parser, self).parse_args(args, namespace)
if self.args.debug:
self.args.traceback = True
if not args.json and env.config.implicit_content_type == 'form':
args.form = True
# Arguments processing and environment setup.
self._apply_no_options(no_options)
self._apply_config()
self._validate_download_options()
self._setup_standard_streams()
self._process_output_options()
self._process_pretty_options()
self._guess_method()
self._parse_items()
if not self.args.ignore_stdin and not env.stdin_isatty:
self._body_from_file(self.env.stdin)
if not (self.args.url.startswith((HTTP, HTTPS))):
scheme = HTTP
if args.debug:
args.traceback = True
# See if we're using curl style shorthand for localhost (:3000/foo)
shorthand = re.match(r'^:(?!:)(\d*)(/?.*)$', self.args.url)
if shorthand:
port = shorthand.group(1)
rest = shorthand.group(2)
self.args.url = scheme + 'localhost'
if port:
self.args.url += ':' + port
self.args.url += rest
else:
self.args.url = scheme + self.args.url
self._process_auth()
if args.output:
env.stdout = args.output
env.stdout_isatty = False
self._process_output_options(args, env)
self._process_pretty_options(args, env)
self._guess_method(args, env)
self._parse_items(args)
if not env.stdin_isatty:
self._body_from_file(args, env.stdin)
if not (args.url.startswith(HTTP) or args.url.startswith(HTTPS)):
scheme = HTTPS if env.progname == 'https' else HTTP
args.url = scheme + args.url
if args.auth and not args.auth.has_password():
# Stdin already read (if not a tty) so it's safe to prompt.
args.auth.prompt_password(urlparse(args.url).netloc)
return args
return self.args
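How the URL normalization above plays out (invocations hypothetical; assuming the HTTP/HTTPS constants hold the 'http://' and 'https://' scheme prefixes):
#   http :3000/foo     ->  http://localhost:3000/foo
#   http :/foo         ->  http://localhost/foo
#   http example.org   ->  http://example.org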
# noinspection PyShadowingBuiltins
def _print_message(self, message, file=None):
# Sneak in our stderr/stdout.
file = {
@ -133,116 +158,235 @@ class Parser(ArgumentParser):
sys.stderr: self.env.stderr,
None: self.env.stderr
}.get(file, file)
if not hasattr(file, 'buffer') and isinstance(message, str):
message = message.encode(self.env.stdout_encoding)
super(Parser, self)._print_message(message, file)
def _body_from_file(self, args, fd):
def _setup_standard_streams(self):
"""
Modify `env.stdout` and `env.stdout_isatty` based on args, if needed.
"""
if not self.env.stdout_isatty and self.args.output_file:
self.error('Cannot use --output, -o with redirected output.')
if self.args.download:
# FIXME: Come up with a cleaner solution.
if not self.env.stdout_isatty:
# Use stdout as the download output file.
self.args.output_file = self.env.stdout
# With `--download`, we write everything that would normally go to
# `stdout` to `stderr` instead. Let's replace the stream so that
# we don't have to use many `if`s throughout the codebase.
# The response body will be treated separately.
self.env.stdout = self.env.stderr
self.env.stdout_isatty = self.env.stderr_isatty
elif self.args.output_file:
# When not `--download`ing, then `--output` simply replaces
# `stdout`. The file is opened for appending, which isn't what
# we want in this case.
self.args.output_file.seek(0)
try:
self.args.output_file.truncate()
except IOError as e:
if e.errno == errno.EINVAL:
# E.g. /dev/null on Linux.
pass
else:
raise
self.env.stdout = self.args.output_file
self.env.stdout_isatty = False
def _apply_config(self):
if (not self.args.json
and self.env.config.implicit_content_type == 'form'):
self.args.form = True
def _process_auth(self):
"""
If only a username is provided via --auth, then ask for a password.
Or, take credentials from the URL, if provided.
"""
url = urlsplit(self.args.url)
if self.args.auth:
if not self.args.auth.has_password():
# Stdin already read (if not a tty) so it's safe to prompt.
if self.args.ignore_stdin:
self.error('Unable to prompt for passwords because'
' --ignore-stdin is set.')
self.args.auth.prompt_password(url.netloc)
elif url.username is not None:
# Handle http://username:password@hostname/
username = url.username
password = url.password or ''
self.args.auth = AuthCredentials(
key=username,
value=password,
sep=SEP_CREDENTIALS,
orig=SEP_CREDENTIALS.join([username, password])
)
def _apply_no_options(self, no_options):
"""For every `--no-OPTION` in `no_options`, set `args.OPTION` to
its default value. This allows for un-setting of options, e.g.,
those specified in the config file.
"""
invalid = []
for option in no_options:
if not option.startswith('--no-'):
invalid.append(option)
continue
# --no-option => --option
inverted = '--' + option[5:]
for action in self._actions:
if inverted in action.option_strings:
setattr(self.args, action.dest, action.default)
break
else:
invalid.append(option)
if invalid:
msg = 'unrecognized arguments: %s'
self.error(msg % ' '.join(invalid))
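A hypothetical example of the un-setting behaviour:
#   default_options = ['--form']
#   $ http --no-form example.org hello=world
# `--no-form` maps back to `--form` and resets args.form to its argparse
# default, so the request is sent as JSON again; a `--no-` flag without a
# matching option is reported as unrecognized.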
def _body_from_file(self, fd):
"""There can only be one source of request data.
Bytes are always read.
"""
if args.data:
if self.args.data:
self.error('Request body (from stdin or a file) and request '
'data (key=value) cannot be mixed.')
args.data = getattr(fd, 'buffer', fd).read()
self.args.data = getattr(fd, 'buffer', fd).read()
def _guess_method(self, args, env):
def _guess_method(self):
"""Set `args.method` if not specified to either POST or GET
based on whether the request has data or not.
"""
if args.method is None:
if self.args.method is None:
# Invoked as `http URL'.
assert not args.items
if not env.stdin_isatty:
args.method = HTTP_POST
assert not self.args.items
if not self.args.ignore_stdin and not self.env.stdin_isatty:
self.args.method = HTTP_POST
else:
args.method = HTTP_GET
self.args.method = HTTP_GET
# FIXME: False positive, e.g., "localhost" matches but is a valid URL.
elif not re.match('^[a-zA-Z]+$', args.method):
elif not re.match('^[a-zA-Z]+$', self.args.method):
# Invoked as `http URL item+'. The URL is now in `args.method`
# and the first ITEM is now incorrectly in `args.url`.
try:
# Parse the URL as an ITEM and store it as the first ITEM arg.
args.items.insert(
0, KeyValueArgType(*SEP_GROUP_ITEMS).__call__(args.url))
self.args.items.insert(0, KeyValueArgType(
*SEP_GROUP_ALL_ITEMS).__call__(self.args.url))
except ArgumentTypeError as e:
if args.traceback:
if self.args.traceback:
raise
self.error(e.message)
self.error(e.args[0])
else:
# Set the URL correctly
args.url = args.method
self.args.url = self.args.method
# Infer the method
has_data = not env.stdin_isatty or any(
item.sep in SEP_GROUP_DATA_ITEMS for item in args.items)
args.method = HTTP_POST if has_data else HTTP_GET
has_data = (
(not self.args.ignore_stdin and not self.env.stdin_isatty)
or any(item.sep in SEP_GROUP_DATA_ITEMS
for item in self.args.items)
)
self.args.method = HTTP_POST if has_data else HTTP_GET
def _parse_items(self, args):
"""Parse `args.items` into `args.headers`, `args.data`,
`args.`, and `args.files`.
def _parse_items(self):
"""Parse `args.items` into `args.headers`, `args.data`, `args.params`,
and `args.files`.
"""
args.headers = CaseInsensitiveDict()
args.data = ParamDict() if args.form else OrderedDict()
args.files = OrderedDict()
args.params = ParamDict()
try:
parse_items(items=args.items,
headers=args.headers,
data=args.data,
files=args.files,
params=args.params)
items = parse_items(
items=self.args.items,
data_class=ParamsDict if self.args.form else OrderedDict
)
except ParseError as e:
if args.traceback:
if self.args.traceback:
raise
self.error(e.message)
self.error(e.args[0])
else:
self.args.headers = items.headers
self.args.data = items.data
self.args.files = items.files
self.args.params = items.params
if args.files and not args.form:
if self.args.files and not self.args.form:
# `http url @/path/to/file`
file_fields = list(args.files.keys())
file_fields = list(self.args.files.keys())
if file_fields != ['']:
self.error(
'Invalid file fields (perhaps you meant --form?): %s'
% ','.join(file_fields))
fn, fd = args.files['']
args.files = {}
self._body_from_file(args, fd)
if 'Content-Type' not in args.headers:
fn, fd = self.args.files['']
self.args.files = {}
self._body_from_file(fd)
if 'Content-Type' not in self.args.headers:
mime, encoding = mimetypes.guess_type(fn, strict=False)
if mime:
content_type = mime
if encoding:
content_type = '%s; charset=%s' % (mime, encoding)
args.headers['Content-Type'] = content_type
self.args.headers['Content-Type'] = content_type
def _process_output_options(self, args, env):
"""Apply defaults to output options or validate the provided ones.
def _process_output_options(self):
"""Apply defaults to output options, or validate the provided ones.
The default output options are stdout-type-sensitive.
"""
if not args.output_options:
args.output_options = (OUTPUT_OPTIONS_DEFAULT if env.stdout_isatty
else OUTPUT_OPTIONS_DEFAULT_STDOUT_REDIRECTED)
if not self.args.output_options:
self.args.output_options = (
OUTPUT_OPTIONS_DEFAULT
if self.env.stdout_isatty
else OUTPUT_OPTIONS_DEFAULT_STDOUT_REDIRECTED
)
unknown = set(args.output_options) - OUTPUT_OPTIONS
if unknown:
self.error('Unknown output options: %s' % ','.join(unknown))
unknown_output_options = set(self.args.output_options) - OUTPUT_OPTIONS
if unknown_output_options:
self.error(
'Unknown output options: %s' % ','.join(unknown_output_options)
)
def _process_pretty_options(self, args, env):
if args.prettify == PRETTY_STDOUT_TTY_ONLY:
args.prettify = PRETTY_MAP['all' if env.stdout_isatty else 'none']
elif args.prettify and env.is_windows:
if self.args.download and OUT_RESP_BODY in self.args.output_options:
# Response body is always downloaded with --download and it goes
# through a different routine, so we remove it.
self.args.output_options = str(
set(self.args.output_options) - set(OUT_RESP_BODY))
def _process_pretty_options(self):
if self.args.prettify == PRETTY_STDOUT_TTY_ONLY:
self.args.prettify = PRETTY_MAP[
'all' if self.env.stdout_isatty else 'none']
elif self.args.prettify and self.env.is_windows:
self.error('Only terminal output can be colorized on Windows.')
else:
args.prettify = PRETTY_MAP[args.prettify]
# noinspection PyTypeChecker
self.args.prettify = PRETTY_MAP[self.args.prettify]
def _validate_download_options(self):
if not self.args.download:
if self.args.download_resume:
self.error('--continue only works with --download')
if self.args.download_resume and not (
self.args.download and self.args.output_file):
self.error('--continue requires --output to be specified')
class ParseError(Exception):
@ -261,6 +405,22 @@ class KeyValue(object):
def __eq__(self, other):
return self.__dict__ == other.__dict__
def __repr__(self):
return repr(self.__dict__)
class SessionNameValidator(object):
def __init__(self, error_message):
self.error_message = error_message
def __call__(self, value):
# Session name can be a path or just a name.
if (os.path.sep not in value
and not VALID_SESSION_NAME_PATTERN.search(value)):
raise ArgumentError(None, self.error_message)
return value
class KeyValueArgType(object):
"""A key-value pair argument type used with `argparse`.
@ -274,6 +434,9 @@ class KeyValueArgType(object):
def __init__(self, *separators):
self.separators = separators
self.special_characters = set('\\')
for separator in separators:
self.special_characters.update(separator)
def __call__(self, string):
"""Parse `string` and return `self.key_value_class()` instance.
@ -288,25 +451,25 @@ class KeyValueArgType(object):
class Escaped(str):
"""Represents an escaped character."""
def tokenize(s):
"""Tokenize `s`. There are only two token types - strings
def tokenize(string):
"""Tokenize `string`. There are only two token types - strings
and escaped characters:
>>> tokenize(r'foo\=bar\\baz')
['foo', Escaped('='), 'bar', Escaped('\\'), 'baz']
tokenize(r'foo\=bar\\baz')
=> ['foo', Escaped('='), 'bar', Escaped('\\'), 'baz']
"""
tokens = ['']
esc = False
for c in s:
if esc:
tokens.extend([Escaped(c), ''])
esc = False
else:
if c == '\\':
esc = True
characters = iter(string)
for char in characters:
if char == '\\':
char = next(characters, '')
if char not in self.special_characters:
tokens[-1] += '\\' + char
else:
tokens[-1] += c
tokens.extend([Escaped(char), ''])
else:
tokens[-1] += char
return tokens
tokens = tokenize(string)
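Illustrative tokenization under the new escaping rules, where only separator characters (and the backslash itself) need escaping; the Windows-style path is hypothetical:
# tokenize(r'foo\=bar')     => ['foo', Escaped('='), 'bar']
# tokenize(r'C:\dir\file')  => ['C:\\dir\\file']  -- backslashes before non-separators are kept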
@ -343,7 +506,7 @@ class KeyValueArgType(object):
else:
raise ArgumentTypeError(
'"%s" is not a valid value' % string)
u'"%s" is not a valid value' % string)
return self.key_value_class(
key=key, value=value, sep=sep, orig=string)
@ -391,7 +554,7 @@ class AuthCredentialsArgType(KeyValueArgType):
)
class ParamDict(OrderedDict):
class RequestItemsDict(OrderedDict):
"""Multi-value dict for URL parameters and form data."""
#noinspection PyMethodOverriding
@ -403,35 +566,49 @@ class ParamDict(OrderedDict):
data and URL params.
"""
# NOTE: Won't work when used for form data with multiple values
# for a field and a file field is present:
# https://github.com/kennethreitz/requests/issues/737
assert not isinstance(value, list)
if key not in self:
super(ParamDict, self).__setitem__(key, value)
super(RequestItemsDict, self).__setitem__(key, value)
else:
if not isinstance(self[key], list):
super(ParamDict, self).__setitem__(key, [self[key]])
super(RequestItemsDict, self).__setitem__(key, [self[key]])
self[key].append(value)
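A quick illustration of the multi-value behaviour (keys and values hypothetical):
d = RequestItemsDict()
d['tag'] = 'python'
d['tag'] = 'http'
# d == {'tag': ['python', 'http']}  -- repeated keys collect their values in a list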
def parse_items(items, data=None, headers=None, files=None, params=None):
class ParamsDict(RequestItemsDict):
pass
class DataDict(RequestItemsDict):
def items(self):
for key, values in super(RequestItemsDict, self).items():
if not isinstance(values, list):
values = [values]
for value in values:
yield key, value
RequestItems = namedtuple('RequestItems',
['headers', 'data', 'files', 'params'])
def parse_items(items,
headers_class=CaseInsensitiveDict,
data_class=OrderedDict,
files_class=DataDict,
params_class=ParamsDict):
"""Parse `KeyValue` `items` into `data`, `headers`, `files`,
and `params`.
"""
if headers is None:
headers = CaseInsensitiveDict()
if data is None:
data = OrderedDict()
if files is None:
files = OrderedDict()
if params is None:
params = ParamDict()
headers = []
data = []
files = []
params = []
for item in items:
value = item.value
key = item.key
if item.sep == SEP_HEADERS:
target = headers
@ -443,21 +620,45 @@ def parse_items(items, data=None, headers=None, files=None, params=None):
value = (os.path.basename(value),
BytesIO(f.read()))
except IOError as e:
raise ParseError(
'Invalid argument "%s": %s' % (item.orig, e))
raise ParseError('"%s": %s' % (item.orig, e))
target = files
elif item.sep in [SEP_DATA, SEP_DATA_RAW_JSON]:
if item.sep == SEP_DATA_RAW_JSON:
elif item.sep in SEP_GROUP_DATA_ITEMS:
if item.sep in SEP_GROUP_DATA_EMBED_ITEMS:
try:
value = json.loads(item.value)
except ValueError:
raise ParseError('"%s" is not valid JSON' % item.orig)
with open(os.path.expanduser(value), 'rb') as f:
value = f.read().decode('utf8')
except IOError as e:
raise ParseError('"%s": %s' % (item.orig, e))
except UnicodeDecodeError:
raise ParseError(
'"%s": cannot embed the content of "%s",'
' not a UTF8 or ASCII-encoded text file'
% (item.orig, item.value)
)
if item.sep in SEP_GROUP_RAW_JSON_ITEMS:
try:
value = load_json_preserve_order(value)
except ValueError as e:
raise ParseError('"%s": %s' % (item.orig, e))
target = data
else:
raise TypeError(item)
target[key] = value
target.append((item.key, value))
return headers, data, files, params
return RequestItems(headers_class(headers),
data_class(data),
files_class(files),
params_class(params))
def readable_file_arg(filename):
try:
open(filename, 'rb')
except IOError as ex:
raise ArgumentTypeError('%s: %s' % (filename, ex.args[1]))
return filename

View File

@ -1,30 +0,0 @@
"""
Provides the `httpie' management command.
Note that the main `http' command points to `httpie.__main__.main()`.
"""
import argparse
from . import sessions
from . import __version__
parser = argparse.ArgumentParser(
description='The HTTPie management command.',
version=__version__
)
subparsers = parser.add_subparsers()
# Only sessions as of now.
sessions.add_commands(subparsers)
def main():
args = parser.parse_args()
args.command(args)
if __name__ == '__main__':
main()

View File

@ -1,56 +1,4 @@
import os
import sys
from requests.compat import urlparse, is_windows, bytes, str
from .config import DEFAULT_CONFIG_DIR, Config
class Environment(object):
"""Holds information about the execution context.
Groups various aspects of the environment in a changeable object
and allows for mocking.
"""
#noinspection PyUnresolvedReferences
is_windows = is_windows
progname = os.path.basename(sys.argv[0])
if progname not in ['http', 'https']:
progname = 'http'
stdin_isatty = sys.stdin.isatty()
stdin = sys.stdin
stdout_isatty = sys.stdout.isatty()
config_dir = DEFAULT_CONFIG_DIR
if stdout_isatty and is_windows:
from colorama.initialise import wrap_stream
stdout = wrap_stream(sys.stdout, convert=None,
strip=None, autoreset=True, wrap=True)
else:
stdout = sys.stdout
stderr = sys.stderr
# Can be set to 0 to disable colors completely.
colors = 256 if '256color' in os.environ.get('TERM', '') else 88
def __init__(self, **kwargs):
assert all(hasattr(type(self), attr)
for attr in kwargs.keys())
self.__dict__.update(**kwargs)
@property
def config(self):
if not hasattr(self, '_config'):
self._config = Config(directory=self.config_dir)
if self._config.is_new:
self._config.save()
else:
self._config.load()
return self._config
from httpie.compat import urlsplit, str
class HTTPMessage(object):
@ -86,8 +34,8 @@ class HTTPMessage(object):
def content_type(self):
"""Return the message content type."""
ct = self._orig.headers.get('Content-Type', '')
if isinstance(ct, bytes):
ct = ct.decode()
if not isinstance(ct, str):
ct = ct.decode('utf8')
return ct
@ -100,11 +48,13 @@ class HTTPResponse(HTTPMessage):
def iter_lines(self, chunk_size):
return ((line, b'\n') for line in self._orig.iter_lines(chunk_size))
#noinspection PyProtectedMember
@property
def headers(self):
original = self._orig.raw._original_response
version = {9: '0.9', 10: '1.0', 11: '1.1'}[original.version]
status_line = 'HTTP/{version} {status} {reason}'.format(
version='.'.join(str(original.version)),
version=version,
status=original.status,
reason=original.reason
)
@ -143,40 +93,33 @@ class HTTPRequest(HTTPMessage):
@property
def headers(self):
"""Return Request-Line"""
url = urlparse(self._orig.url)
url = urlsplit(self._orig.url)
# Querystring
qs = ''
if url.query or self._orig.params:
qs = '?'
if url.query:
qs += url.query
# Requests doesn't make params part of ``request.url``.
if self._orig.params:
if url.query:
qs += '&'
#noinspection PyUnresolvedReferences
qs += type(self._orig)._encode_params(self._orig.params)
# Request-Line
request_line = '{method} {path}{query} HTTP/1.1'.format(
method=self._orig.method,
path=url.path or '/',
query=qs
query='?' + url.query if url.query else ''
)
headers = dict(self._orig.headers)
if 'Host' not in self._orig.headers:
headers['Host'] = url.netloc.split('@')[-1]
if 'Host' not in headers:
headers['Host'] = urlparse(self._orig.url).netloc
headers = ['%s: %s' % (name, value)
for name, value in headers.items()]
headers = [
'%s: %s' % (
name,
value if isinstance(value, str) else value.decode('utf8')
)
for name, value in headers.items()
]
headers.insert(0, request_line)
headers = '\r\n'.join(headers).strip()
return '\r\n'.join(headers).strip()
if isinstance(headers, bytes):
# Python < 3
headers = headers.decode('utf8')
return headers
@property
def encoding(self):
@ -184,26 +127,8 @@ class HTTPRequest(HTTPMessage):
@property
def body(self):
"""Reconstruct and return the original request body bytes."""
if self._orig.files:
# TODO: would be nice if we didn't need to encode the files again
# FIXME: Also the boundary header doesn't match the one used.
for fn, fd in self._orig.files.values():
# Rewind the files as they have already been read before.
fd.seek(0)
body, _ = self._orig._encode_files(self._orig.files)
else:
try:
body = self._orig.data
except AttributeError:
# requests < 0.12.1
body = self._orig._enc_data
if isinstance(body, dict):
#noinspection PyUnresolvedReferences
body = type(self._orig)._encode_params(body)
if isinstance(body, str):
body = body.encode('utf8')
return body
body = self._orig.body
if isinstance(body, str):
# Happens with JSON/form request data parsed from the command line.
body = body.encode('utf8')
return body or b''

View File

@ -1,496 +0,0 @@
"""Output streaming, processing and formatting.
"""
import json
from functools import partial
from itertools import chain
import pygments
from pygments import token, lexer
from pygments.styles import get_style_by_name, STYLE_MAP
from pygments.lexers import get_lexer_for_mimetype, get_lexer_by_name
from pygments.formatters.terminal import TerminalFormatter
from pygments.formatters.terminal256 import Terminal256Formatter
from pygments.util import ClassNotFound
from requests.compat import is_windows
from .solarized import Solarized256Style
from .models import HTTPRequest, HTTPResponse, Environment
from .input import (OUT_REQ_BODY, OUT_REQ_HEAD,
OUT_RESP_HEAD, OUT_RESP_BODY)
# Colors on Windows via colorama don't look that
# great and fruity seems to give the best result there.
AVAILABLE_STYLES = set(STYLE_MAP.keys())
AVAILABLE_STYLES.add('solarized')
DEFAULT_STYLE = 'solarized' if not is_windows else 'fruity'
BINARY_SUPPRESSED_NOTICE = (
b'\n'
b'+-----------------------------------------+\n'
b'| NOTE: binary data not shown in terminal |\n'
b'+-----------------------------------------+'
)
class BinarySuppressedError(Exception):
"""An error indicating that the body is binary and won't be written,
e.g., for terminal output."""
message = BINARY_SUPPRESSED_NOTICE
###############################################################################
# Output Streams
###############################################################################
def write(stream, outfile, flush):
"""Write the output stream."""
try:
# Writing bytes so we use the buffer interface (Python 3).
buf = outfile.buffer
except AttributeError:
buf = outfile
for chunk in stream:
buf.write(chunk)
if flush:
outfile.flush()
def write_with_colors_win_p3k(stream, outfile, flush):
"""Like `write`, but colorized chunks are written as text
directly to `outfile` to ensure it gets processed by colorama.
Applies only to Windows with Python 3 and colorized terminal output.
"""
color = b'\x1b['
encoding = outfile.encoding
for chunk in stream:
if color in chunk:
outfile.write(chunk.decode(encoding))
else:
outfile.buffer.write(chunk)
if flush:
outfile.flush()
def output_stream(args, env, request, response):
"""Build and return a chain of iterators over the `request`-`response`
exchange each of which yields `bytes` chunks.
"""
Stream = make_stream(env, args)
req_h = OUT_REQ_HEAD in args.output_options
req_b = OUT_REQ_BODY in args.output_options
resp_h = OUT_RESP_HEAD in args.output_options
resp_b = OUT_RESP_BODY in args.output_options
req = req_h or req_b
resp = resp_h or resp_b
output = []
if req:
output.append(Stream(
msg=HTTPRequest(request),
with_headers=req_h,
with_body=req_b))
if req_b and resp:
# Request/Response separator.
output.append([b'\n\n'])
if resp:
output.append(Stream(
msg=HTTPResponse(response),
with_headers=resp_h,
with_body=resp_b))
if env.stdout_isatty and resp_b:
# Ensure a blank line after the response body.
# For terminal output only.
output.append([b'\n\n'])
return chain(*output)
def make_stream(env, args):
"""Pick the right stream type based on `env` and `args`.
Wrap it in a partial with the type-specific args so that
we don't need to think what stream we are dealing with.
"""
if not env.stdout_isatty and not args.prettify:
Stream = partial(
RawStream,
chunk_size=RawStream.CHUNK_SIZE_BY_LINE
if args.stream
else RawStream.CHUNK_SIZE
)
elif args.prettify:
Stream = partial(
PrettyStream if args.stream else BufferedPrettyStream,
env=env,
processor=OutputProcessor(
env=env, groups=args.prettify, pygments_style=args.style),
)
else:
Stream = partial(EncodedStream, env=env)
return Stream
class BaseStream(object):
"""Base HTTP message stream class."""
def __init__(self, msg, with_headers=True, with_body=True):
"""
:param msg: a :class:`models.HTTPMessage` subclass
:param with_headers: if `True`, headers will be included
:param with_body: if `True`, body will be included
"""
self.msg = msg
self.with_headers = with_headers
self.with_body = with_body
def _headers(self):
"""Return the headers' bytes."""
return self.msg.headers.encode('ascii')
def _body(self):
"""Return an iterator over the message body."""
raise NotImplementedError()
def __iter__(self):
"""Return an iterator over `self.msg`."""
if self.with_headers:
yield self._headers()
yield b'\r\n\r\n'
if self.with_body:
try:
for chunk in self._body():
yield chunk
except BinarySuppressedError as e:
if self.with_headers:
yield b'\n'
yield e.message
class RawStream(BaseStream):
"""The message is streamed in chunks with no processing."""
CHUNK_SIZE = 1024 * 100
CHUNK_SIZE_BY_LINE = 1024 * 5
def __init__(self, chunk_size=CHUNK_SIZE, **kwargs):
super(RawStream, self).__init__(**kwargs)
self.chunk_size = chunk_size
def _body(self):
return self.msg.iter_body(self.chunk_size)
class EncodedStream(BaseStream):
"""Encoded HTTP message stream.
The message bytes are converted to an encoding suitable for
`self.env.stdout`. Unicode errors are replaced and binary data
is suppressed. The body is always streamed by line.
"""
CHUNK_SIZE = 1024 * 5
def __init__(self, env=Environment(), **kwargs):
super(EncodedStream, self).__init__(**kwargs)
if env.stdout_isatty:
# Use the encoding supported by the terminal.
output_encoding = getattr(env.stdout, 'encoding', None)
else:
# Preserve the message encoding.
output_encoding = self.msg.encoding
# Default to utf8 when unsure.
self.output_encoding = output_encoding or 'utf8'
def _body(self):
for line, lf in self.msg.iter_lines(self.CHUNK_SIZE):
if b'\0' in line:
raise BinarySuppressedError()
yield line.decode(self.msg.encoding)\
.encode(self.output_encoding, 'replace') + lf
class PrettyStream(EncodedStream):
"""In addition to :class:`EncodedStream` behaviour, this stream applies
content processing.
Useful for long-lived HTTP responses that stream by lines
such as the Twitter streaming API.
"""
CHUNK_SIZE = 1024 * 5
def __init__(self, processor, **kwargs):
super(PrettyStream, self).__init__(**kwargs)
self.processor = processor
def _headers(self):
return self.processor.process_headers(
self.msg.headers).encode(self.output_encoding)
def _body(self):
for line, lf in self.msg.iter_lines(self.CHUNK_SIZE):
if b'\0' in line:
raise BinarySuppressedError()
yield self._process_body(line) + lf
def _process_body(self, chunk):
return (self.processor
.process_body(
chunk.decode(self.msg.encoding, 'replace'),
self.msg.content_type)
.encode(self.output_encoding, 'replace'))
class BufferedPrettyStream(PrettyStream):
"""The same as :class:`PrettyStream` except that the body is fully
fetched before it's processed.
Suitable for regular HTTP responses.
"""
CHUNK_SIZE = 1024 * 10
def _body(self):
#noinspection PyArgumentList
# Read the whole body before prettifying it,
# but bail out immediately if the body is binary.
body = bytearray()
for chunk in self.msg.iter_body(self.CHUNK_SIZE):
if b'\0' in chunk:
raise BinarySuppressedError()
body.extend(chunk)
yield self._process_body(body)
###############################################################################
# Processing
###############################################################################
class HTTPLexer(lexer.RegexLexer):
"""Simplified HTTP lexer for Pygments.
It only operates on headers and provides a stronger contrast between
their names and values than the original one bundled with Pygments
(:class:`pygments.lexers.text.HttpLexer`), especially when
the Solarized color scheme is used.
"""
name = 'HTTP'
aliases = ['http']
filenames = ['*.http']
tokens = {
'root': [
# Request-Line
(r'([A-Z]+)( +)([^ ]+)( +)(HTTP)(/)(\d+\.\d+)',
lexer.bygroups(
token.Name.Function,
token.Text,
token.Name.Namespace,
token.Text,
token.Keyword.Reserved,
token.Operator,
token.Number
)),
# Response Status-Line
(r'(HTTP)(/)(\d+\.\d+)( +)(\d{3})( +)(.+)',
lexer.bygroups(
token.Keyword.Reserved, # 'HTTP'
token.Operator, # '/'
token.Number, # Version
token.Text,
token.Number, # Status code
token.Text,
token.Name.Exception, # Reason
)),
# Header
(r'(.*?)( *)(:)( *)(.+)', lexer.bygroups(
token.Name.Attribute, # Name
token.Text,
token.Operator, # Colon
token.Text,
token.String # Value
))
]
}
class BaseProcessor(object):
"""Base, noop output processor class."""
enabled = True
def __init__(self, env=Environment(), **kwargs):
"""
:param env: an :class:`Environment` instance
:param kwargs: additional keyword argument that some
processor might require.
"""
self.env = env
self.kwargs = kwargs
def process_headers(self, headers):
"""Return processed `headers`
:param headers: The headers as text.
"""
return headers
def process_body(self, content, content_type, subtype):
"""Return processed `content`.
:param content: The body content as text
:param content_type: Full content type, e.g., 'application/atom+xml'.
:param subtype: E.g. 'xml'.
"""
return content
class JSONProcessor(BaseProcessor):
"""JSON body processor."""
def process_body(self, content, content_type, subtype):
if subtype == 'json':
try:
# Indent the JSON data, sort keys by name, and
# avoid unicode escapes to improve readability.
content = json.dumps(json.loads(content),
sort_keys=True,
ensure_ascii=False,
indent=4)
except ValueError:
# Invalid JSON but we don't care.
pass
return content
class PygmentsProcessor(BaseProcessor):
"""A processor that applies syntax-highlighting using Pygments
to the headers, and to the body as well if its content type is recognized.
"""
def __init__(self, *args, **kwargs):
super(PygmentsProcessor, self).__init__(*args, **kwargs)
# Cache that speeds up when we process streamed body by line.
self.lexers_by_type = {}
if not self.env.colors:
self.enabled = False
return
try:
style = get_style_by_name(
self.kwargs.get('pygments_style', DEFAULT_STYLE))
except ClassNotFound:
style = Solarized256Style
if self.env.is_windows or self.env.colors == 256:
fmt_class = Terminal256Formatter
else:
fmt_class = TerminalFormatter
self.formatter = fmt_class(style=style)
def process_headers(self, headers):
return pygments.highlight(
headers, HTTPLexer(), self.formatter).strip()
def process_body(self, content, content_type, subtype):
try:
lexer = self.lexers_by_type.get(content_type)
if not lexer:
try:
lexer = get_lexer_for_mimetype(content_type)
except ClassNotFound:
lexer = get_lexer_by_name(subtype)
self.lexers_by_type[content_type] = lexer
except ClassNotFound:
pass
else:
content = pygments.highlight(content, lexer, self.formatter)
return content.strip()
class HeadersProcessor(BaseProcessor):
"""Sorts headers by name retaining relative order of multiple headers
with the same name.
"""
def process_headers(self, headers):
lines = headers.splitlines()
headers = sorted(lines[1:], key=lambda h: h.split(':')[0])
return '\r\n'.join(lines[:1] + headers)
class OutputProcessor(object):
"""A delegate class that invokes the actual processors."""
installed_processors = {
'format': [
HeadersProcessor,
JSONProcessor
],
'colors': [
PygmentsProcessor
]
}
def __init__(self, groups, env=Environment(), **kwargs):
"""
:param env: a :class:`models.Environment` instance
:param groups: the groups of processors to be applied
:param kwargs: additional keyword arguments for processors
"""
self.processors = []
for group in groups:
for cls in self.installed_processors[group]:
processor = cls(env, **kwargs)
if processor.enabled:
self.processors.append(processor)
def process_headers(self, headers):
for processor in self.processors:
headers = processor.process_headers(headers)
return headers
def process_body(self, content, content_type):
# e.g., 'application/atom+xml'
content_type = content_type.split(';')[0]
# e.g., 'xml'
subtype = content_type.split('/')[-1].split('+')[-1]
for processor in self.processors:
content = processor.process_body(content, content_type, subtype)
return content
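As an aside, a minimal usage sketch of this old-style delegate (illustrative only, not part of the diff; it assumes the module's existing Environment import and the processors defined above):
# Illustrative sketch, not part of the diff.
processor = OutputProcessor(groups=['format'])
pretty = processor.process_body('{"b": 1, "a": 2}', 'application/json')
# JSONProcessor re-indents and sorts the keys; HeadersProcessor leaves bodies alone.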


@ -0,0 +1,216 @@
import pygments.lexer
import pygments.token
import pygments.styles
import pygments.lexers
import pygments.style
from pygments.formatters.terminal import TerminalFormatter
from pygments.formatters.terminal256 import Terminal256Formatter
from pygments.util import ClassNotFound
from httpie.compat import is_windows
from httpie.plugins import FormatterPlugin
# Colors on Windows via colorama don't look that
# great and fruity seems to give the best result there.
AVAILABLE_STYLES = set(pygments.styles.STYLE_MAP.keys())
AVAILABLE_STYLES.add('solarized')
DEFAULT_STYLE = 'solarized' if not is_windows else 'fruity'
class ColorFormatter(FormatterPlugin):
"""
Colorize using Pygments.
This formatter applies syntax highlighting to the headers,
and also to the body if its content type is recognized.
"""
group_name = 'colors'
def __init__(self, env, color_scheme=DEFAULT_STYLE, **kwargs):
super(ColorFormatter, self).__init__(**kwargs)
if not env.colors:
self.enabled = False
return
# Cache to speed things up when we process streamed body by line.
self.lexer_cache = {}
try:
style_class = pygments.styles.get_style_by_name(color_scheme)
except ClassNotFound:
style_class = Solarized256Style
if env.colors == 256:
fmt_class = Terminal256Formatter
else:
fmt_class = TerminalFormatter
self.formatter = fmt_class(style=style_class)
def format_headers(self, headers):
return pygments.highlight(headers, HTTPLexer(), self.formatter).strip()
def format_body(self, body, mime):
lexer = self.get_lexer(mime)
if lexer:
body = pygments.highlight(body, lexer, self.formatter)
return body.strip()
def get_lexer(self, mime):
if mime in self.lexer_cache:
return self.lexer_cache[mime]
self.lexer_cache[mime] = get_lexer(mime)
return self.lexer_cache[mime]
def get_lexer(mime):
mime_types, lexer_names = [mime], []
type_, subtype = mime.split('/')
if '+' not in subtype:
lexer_names.append(subtype)
else:
subtype_name, subtype_suffix = subtype.split('+')
lexer_names.extend([subtype_name, subtype_suffix])
mime_types.extend([
'%s/%s' % (type_, subtype_name),
'%s/%s' % (type_, subtype_suffix)
])
# as a last resort, if no lexer feels responsible, and
# the subtype contains 'json', take the JSON lexer
if 'json' in subtype:
lexer_names.append('json')
lexer = None
for mime_type in mime_types:
try:
lexer = pygments.lexers.get_lexer_for_mimetype(mime_type)
break
except ClassNotFound:
pass
else:
for name in lexer_names:
try:
lexer = pygments.lexers.get_lexer_by_name(name)
except ClassNotFound:
pass
return lexer
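A quick sketch of the fallback order (illustrative only): for a structured-syntax suffix, the full type, the bare subtype, and the suffix type are all tried as MIME types before falling back to lexer names.
# Illustrative only:
#   get_lexer('application/vnd.api+json') tries the MIME types
#   'application/vnd.api+json', 'application/vnd.api', 'application/json'
#   and, failing those, the lexer names 'vnd.api', 'json', 'json',
#   so JSON responses with vendor types still get the JSON lexer.
lexer = get_lexer('application/vnd.api+json')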
class HTTPLexer(pygments.lexer.RegexLexer):
"""Simplified HTTP lexer for Pygments.
It only operates on headers and provides a stronger contrast between
their names and values than the original one bundled with Pygments
(:class:`pygments.lexers.text.HttpLexer`), especially when the
Solarized color scheme is used.
"""
name = 'HTTP'
aliases = ['http']
filenames = ['*.http']
tokens = {
'root': [
# Request-Line
(r'([A-Z]+)( +)([^ ]+)( +)(HTTP)(/)(\d+\.\d+)',
pygments.lexer.bygroups(
pygments.token.Name.Function,
pygments.token.Text,
pygments.token.Name.Namespace,
pygments.token.Text,
pygments.token.Keyword.Reserved,
pygments.token.Operator,
pygments.token.Number
)),
# Response Status-Line
(r'(HTTP)(/)(\d+\.\d+)( +)(\d{3})( +)(.+)',
pygments.lexer.bygroups(
pygments.token.Keyword.Reserved, # 'HTTP'
pygments.token.Operator, # '/'
pygments.token.Number, # Version
pygments.token.Text,
pygments.token.Number, # Status code
pygments.token.Text,
pygments.token.Name.Exception, # Reason
)),
# Header
(r'(.*?)( *)(:)( *)(.+)', pygments.lexer.bygroups(
pygments.token.Name.Attribute, # Name
pygments.token.Text,
pygments.token.Operator, # Colon
pygments.token.Text,
pygments.token.String # Value
))
]
}
class Solarized256Style(pygments.style.Style):
"""
solarized256
------------
A Pygments style inspired by Solarized's 256 color mode.
:copyright: (c) 2011 by Hank Gay, (c) 2012 by John Mastro.
:license: BSD, see LICENSE for more details.
"""
BASE03 = "#1c1c1c"
BASE02 = "#262626"
BASE01 = "#4e4e4e"
BASE00 = "#585858"
BASE0 = "#808080"
BASE1 = "#8a8a8a"
BASE2 = "#d7d7af"
BASE3 = "#ffffd7"
YELLOW = "#af8700"
ORANGE = "#d75f00"
RED = "#af0000"
MAGENTA = "#af005f"
VIOLET = "#5f5faf"
BLUE = "#0087ff"
CYAN = "#00afaf"
GREEN = "#5f8700"
background_color = BASE03
styles = {
pygments.token.Keyword: GREEN,
pygments.token.Keyword.Constant: ORANGE,
pygments.token.Keyword.Declaration: BLUE,
pygments.token.Keyword.Namespace: ORANGE,
pygments.token.Keyword.Reserved: BLUE,
pygments.token.Keyword.Type: RED,
pygments.token.Name.Attribute: BASE1,
pygments.token.Name.Builtin: BLUE,
pygments.token.Name.Builtin.Pseudo: BLUE,
pygments.token.Name.Class: BLUE,
pygments.token.Name.Constant: ORANGE,
pygments.token.Name.Decorator: BLUE,
pygments.token.Name.Entity: ORANGE,
pygments.token.Name.Exception: YELLOW,
pygments.token.Name.Function: BLUE,
pygments.token.Name.Tag: BLUE,
pygments.token.Name.Variable: BLUE,
pygments.token.String: CYAN,
pygments.token.String.Backtick: BASE01,
pygments.token.String.Char: CYAN,
pygments.token.String.Doc: CYAN,
pygments.token.String.Escape: RED,
pygments.token.String.Heredoc: CYAN,
pygments.token.String.Regex: RED,
pygments.token.Number: CYAN,
pygments.token.Operator: BASE1,
pygments.token.Operator.Word: GREEN,
pygments.token.Comment: BASE01,
pygments.token.Comment.Preproc: GREEN,
pygments.token.Comment.Special: GREEN,
pygments.token.Generic.Deleted: CYAN,
pygments.token.Generic.Emph: 'italic',
pygments.token.Generic.Error: RED,
pygments.token.Generic.Heading: ORANGE,
pygments.token.Generic.Inserted: GREEN,
pygments.token.Generic.Strong: 'bold',
pygments.token.Generic.Subheading: ORANGE,
pygments.token.Token: BASE1,
pygments.token.Token.Other: ORANGE,
}


@ -0,0 +1,14 @@
from httpie.plugins import FormatterPlugin
class HeadersFormatter(FormatterPlugin):
def format_headers(self, headers):
"""
Sorts headers by name while retaining relative
order of multiple headers with the same name.
"""
lines = headers.splitlines()
headers = sorted(lines[1:], key=lambda h: h.split(':')[0])
return '\r\n'.join(lines[:1] + headers)
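A small illustrative example of the sort behaviour (the status line always stays first):
# Illustrative only.
raw = 'HTTP/1.1 200 OK\r\nServer: nginx\r\nContent-Type: text/plain'
print(HeadersFormatter().format_headers(raw))
# HTTP/1.1 200 OK
# Content-Type: text/plain
# Server: nginx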


@ -0,0 +1,26 @@
from __future__ import absolute_import
import json
from httpie.plugins import FormatterPlugin
DEFAULT_INDENT = 4
class JSONFormatter(FormatterPlugin):
def format_body(self, body, mime):
if 'json' in mime:
try:
obj = json.loads(body)
except ValueError:
# Invalid JSON, ignore.
pass
else:
# Indent, sort keys by name, and avoid
# unicode escapes to improve readability.
body = json.dumps(obj,
sort_keys=True,
ensure_ascii=False,
indent=DEFAULT_INDENT)
return body
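Illustrative usage (any MIME type containing 'json' is formatted, which keeps vendor types such as 'application/hal+json' covered):
# Illustrative only.
print(JSONFormatter().format_body('{"b": 1, "a": 2}', 'application/hal+json'))
# {
#     "a": 2,
#     "b": 1
# }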


@ -0,0 +1,61 @@
from __future__ import absolute_import
import re
from xml.etree import ElementTree
from httpie.plugins import FormatterPlugin
DECLARATION_RE = re.compile('<\?xml[^\n]+?\?>', flags=re.I)
DOCTYPE_RE = re.compile('<!DOCTYPE[^\n]+?>', flags=re.I)
DEFAULT_INDENT = 4
def indent(elem, indent_text=' ' * DEFAULT_INDENT):
"""
In-place prettyprint formatter
C.f. http://effbot.org/zone/element-lib.htm#prettyprint
"""
def _indent(elem, level=0):
i = "\n" + level * indent_text
if len(elem):
if not elem.text or not elem.text.strip():
elem.text = i + indent_text
if not elem.tail or not elem.tail.strip():
elem.tail = i
for elem in elem:
_indent(elem, level + 1)
if not elem.tail or not elem.tail.strip():
elem.tail = i
else:
if level and (not elem.tail or not elem.tail.strip()):
elem.tail = i
return _indent(elem)
class XMLFormatter(FormatterPlugin):
# TODO: tests
def format_body(self, body, mime):
if 'xml' in mime:
# FIXME: orig NS names get forgotten during the conversion, etc.
try:
root = ElementTree.fromstring(body.encode('utf8'))
except ElementTree.ParseError:
# Ignore invalid XML errors (skips attempting to pretty print)
pass
else:
indent(root)
# Use the original declaration
declaration = DECLARATION_RE.match(body)
doctype = DOCTYPE_RE.match(body)
body = ElementTree.tostring(root, encoding='utf-8')\
.decode('utf8')
if doctype:
body = '%s\n%s' % (doctype.group(0), body)
if declaration:
body = '%s\n%s' % (declaration.group(0), body)
return body
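Illustrative usage; well-formed XML is re-indented, while unparsable bodies fall through unchanged:
# Illustrative only.
formatter = XMLFormatter()
pretty = formatter.format_body('<root><a>1</a></root>', 'application/xml')
broken = formatter.format_body('<root><a>', 'text/xml')  # returned as-is on ParseError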


@ -0,0 +1,50 @@
import re
from httpie.plugins import plugin_manager
from httpie.context import Environment
MIME_RE = re.compile(r'^[^/]+/[^/]+$')
def is_valid_mime(mime):
return mime and MIME_RE.match(mime)
class Conversion(object):
def get_converter(self, mime):
if is_valid_mime(mime):
for converter_class in plugin_manager.get_converters():
if converter_class.supports(mime):
return converter_class(mime)
class Formatting(object):
"""A delegate class that invokes the actual processors."""
def __init__(self, groups, env=Environment(), **kwargs):
"""
:param groups: names of processor groups to be applied
:param env: Environment
:param kwargs: additional keyword arguments for processors
"""
available_plugins = plugin_manager.get_formatters_grouped()
self.enabled_plugins = []
for group in groups:
for cls in available_plugins[group]:
p = cls(env=env, **kwargs)
if p.enabled:
self.enabled_plugins.append(p)
def format_headers(self, headers):
for p in self.enabled_plugins:
headers = p.format_headers(headers)
return headers
def format_body(self, content, mime):
if is_valid_mime(mime):
for p in self.enabled_plugins:
content = p.format_body(content, mime)
return content
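Illustrative usage of the new delegate, assuming the built-in formatter plugins have been registered as in httpie/plugins/__init__.py further down in this diff:
# Illustrative only.
formatting = Formatting(groups=['format'])
body = formatting.format_body('{"b": 1, "a": 2}', 'application/json')
headers = formatting.format_headers('HTTP/1.1 200 OK\r\nServer: nginx')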

293
httpie/output/streams.py Normal file

@ -0,0 +1,293 @@
from itertools import chain
from functools import partial
from httpie.compat import str
from httpie.context import Environment
from httpie.models import HTTPRequest, HTTPResponse
from httpie.input import (OUT_REQ_BODY, OUT_REQ_HEAD,
OUT_RESP_HEAD, OUT_RESP_BODY)
from httpie.output.processing import Formatting, Conversion
BINARY_SUPPRESSED_NOTICE = (
b'\n'
b'+-----------------------------------------+\n'
b'| NOTE: binary data not shown in terminal |\n'
b'+-----------------------------------------+'
)
class BinarySuppressedError(Exception):
"""An error indicating that the body is binary and won't be written,
e.g., for terminal output."""
message = BINARY_SUPPRESSED_NOTICE
def write(stream, outfile, flush):
"""Write the output stream."""
try:
# Writing bytes so we use the buffer interface (Python 3).
buf = outfile.buffer
except AttributeError:
buf = outfile
for chunk in stream:
buf.write(chunk)
if flush:
outfile.flush()
def write_with_colors_win_py3(stream, outfile, flush):
"""Like `write`, but colorized chunks are written as text
directly to `outfile` to ensure they get processed by colorama.
Applies only to Windows with Python 3 and colorized terminal output.
"""
color = b'\x1b['
encoding = outfile.encoding
for chunk in stream:
if color in chunk:
outfile.write(chunk.decode(encoding))
else:
outfile.buffer.write(chunk)
if flush:
outfile.flush()
def build_output_stream(args, env, request, response):
"""Build and return a chain of iterators over the `request`-`response`
exchange, each of which yields `bytes` chunks.
"""
req_h = OUT_REQ_HEAD in args.output_options
req_b = OUT_REQ_BODY in args.output_options
resp_h = OUT_RESP_HEAD in args.output_options
resp_b = OUT_RESP_BODY in args.output_options
req = req_h or req_b
resp = resp_h or resp_b
output = []
Stream = get_stream_type(env, args)
if req:
output.append(Stream(
msg=HTTPRequest(request),
with_headers=req_h,
with_body=req_b))
if req_b and resp:
# Request/Response separator.
output.append([b'\n\n'])
if resp:
output.append(Stream(
msg=HTTPResponse(response),
with_headers=resp_h,
with_body=resp_b))
if env.stdout_isatty and resp_b:
# Ensure a blank line after the response body.
# For terminal output only.
output.append([b'\n\n'])
return chain(*output)
def get_stream_type(env, args):
"""Pick the right stream type based on `env` and `args`.
Wrap it in a partial with the type-specific args so that
the caller doesn't need to know which stream type it is dealing with.
"""
if not env.stdout_isatty and not args.prettify:
Stream = partial(
RawStream,
chunk_size=RawStream.CHUNK_SIZE_BY_LINE
if args.stream
else RawStream.CHUNK_SIZE
)
elif args.prettify:
Stream = partial(
PrettyStream if args.stream else BufferedPrettyStream,
env=env,
conversion=Conversion(),
formatting=Formatting(env=env, groups=args.prettify,
color_scheme=args.style),
)
else:
Stream = partial(EncodedStream, env=env)
return Stream
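A sketch of how the returned partial is consumed (illustrative only; env, args, and response are hypothetical stand-ins for the objects the caller already has):
# Illustrative only.
Stream = get_stream_type(env, args)  # e.g. partial(BufferedPrettyStream, env=..., ...)
stream = Stream(msg=HTTPResponse(response), with_headers=True, with_body=True)
write(stream, outfile=env.stdout, flush=False)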
class BaseStream(object):
"""Base HTTP message output stream class."""
def __init__(self, msg, with_headers=True, with_body=True,
on_body_chunk_downloaded=None):
"""
:param msg: a :class:`models.HTTPMessage` subclass
:param with_headers: if `True`, headers will be included
:param with_body: if `True`, body will be included
"""
assert with_headers or with_body
self.msg = msg
self.with_headers = with_headers
self.with_body = with_body
self.on_body_chunk_downloaded = on_body_chunk_downloaded
def get_headers(self):
"""Return the headers' bytes."""
return self.msg.headers.encode('utf8')
def iter_body(self):
"""Return an iterator over the message body."""
raise NotImplementedError()
def __iter__(self):
"""Return an iterator over `self.msg`."""
if self.with_headers:
yield self.get_headers()
yield b'\r\n\r\n'
if self.with_body:
try:
for chunk in self.iter_body():
yield chunk
if self.on_body_chunk_downloaded:
self.on_body_chunk_downloaded(chunk)
except BinarySuppressedError as e:
if self.with_headers:
yield b'\n'
yield e.message
class RawStream(BaseStream):
"""The message is streamed in chunks with no processing."""
CHUNK_SIZE = 1024 * 100
CHUNK_SIZE_BY_LINE = 1
def __init__(self, chunk_size=CHUNK_SIZE, **kwargs):
super(RawStream, self).__init__(**kwargs)
self.chunk_size = chunk_size
def iter_body(self):
return self.msg.iter_body(self.chunk_size)
class EncodedStream(BaseStream):
"""Encoded HTTP message stream.
The message bytes are converted to an encoding suitable for
`self.env.stdout`. Unicode errors are replaced and binary data
is suppressed. The body is always streamed by line.
"""
CHUNK_SIZE = 1
def __init__(self, env=Environment(), **kwargs):
super(EncodedStream, self).__init__(**kwargs)
if env.stdout_isatty:
# Use the encoding supported by the terminal.
output_encoding = env.stdout_encoding
else:
# Preserve the message encoding.
output_encoding = self.msg.encoding
# Default to utf8 when unsure.
self.output_encoding = output_encoding or 'utf8'
def iter_body(self):
for line, lf in self.msg.iter_lines(self.CHUNK_SIZE):
if b'\0' in line:
raise BinarySuppressedError()
yield line.decode(self.msg.encoding) \
.encode(self.output_encoding, 'replace') + lf
class PrettyStream(EncodedStream):
"""In addition to :class:`EncodedStream` behaviour, this stream applies
content processing.
Useful for long-lived HTTP responses that stream by line,
such as the Twitter streaming API.
"""
CHUNK_SIZE = 1
def __init__(self, conversion, formatting, **kwargs):
super(PrettyStream, self).__init__(**kwargs)
self.formatting = formatting
self.conversion = conversion
self.mime = self.msg.content_type.split(';')[0]
def get_headers(self):
return self.formatting.format_headers(
self.msg.headers).encode(self.output_encoding)
def iter_body(self):
first_chunk = True
iter_lines = self.msg.iter_lines(self.CHUNK_SIZE)
for line, lf in iter_lines:
if b'\0' in line:
if first_chunk:
converter = self.conversion.get_converter(self.mime)
if converter:
body = bytearray()
# noinspection PyAssignmentToLoopOrWithParameter
for line, lf in chain([(line, lf)], iter_lines):
body.extend(line)
body.extend(lf)
self.mime, body = converter.convert(body)
assert isinstance(body, str)
yield self.process_body(body)
return
raise BinarySuppressedError()
yield self.process_body(line) + lf
first_chunk = False
def process_body(self, chunk):
if not isinstance(chunk, str):
# Text when a converter has been used,
# otherwise it will always be bytes.
chunk = chunk.decode(self.msg.encoding, 'replace')
chunk = self.formatting.format_body(content=chunk, mime=self.mime)
return chunk.encode(self.output_encoding, 'replace')
class BufferedPrettyStream(PrettyStream):
"""The same as :class:`PrettyStream` except that the body is fully
fetched before it's processed.
Suitable for regular HTTP responses.
"""
CHUNK_SIZE = 1024 * 10
def iter_body(self):
# Read the whole body before prettifying it,
# but bail out immediately if the body is binary.
converter = None
body = bytearray()
for chunk in self.msg.iter_body(self.CHUNK_SIZE):
if not converter and b'\0' in chunk:
converter = self.conversion.get_converter(self.mime)
if not converter:
raise BinarySuppressedError()
body.extend(chunk)
if converter:
self.mime, body = converter.convert(body)
yield self.process_body(body)


@ -0,0 +1,16 @@
from httpie.plugins.base import AuthPlugin, FormatterPlugin, ConverterPlugin
from httpie.plugins.manager import PluginManager
from httpie.plugins.builtin import BasicAuthPlugin, DigestAuthPlugin
from httpie.output.formatters.headers import HeadersFormatter
from httpie.output.formatters.json import JSONFormatter
from httpie.output.formatters.xml import XMLFormatter
from httpie.output.formatters.colors import ColorFormatter
plugin_manager = PluginManager()
plugin_manager.register(BasicAuthPlugin,
DigestAuthPlugin)
plugin_manager.register(HeadersFormatter,
JSONFormatter,
XMLFormatter,
ColorFormatter)

73
httpie/plugins/base.py Normal file

@ -0,0 +1,73 @@
class BasePlugin(object):
# The name of the plugin, e.g. "My auth".
name = None
# Optional short description. Will be shown in the help
# under --auth-type.
description = None
# This will be set automatically once the plugin has been loaded.
package_name = None
class AuthPlugin(BasePlugin):
"""
Base auth plugin class.
See <https://github.com/jkbr/httpie-ntlm> for an example auth plugin.
"""
# The value that should be passed to --auth-type
# to use this auth plugin, e.g. "my-auth".
auth_type = None
def get_auth(self, username, password):
"""
Return a ``requests.auth.AuthBase`` subclass instance.
"""
raise NotImplementedError()
class ConverterPlugin(object):
def __init__(self, mime):
self.mime = mime
def convert(self, content_bytes):
raise NotImplementedError
@classmethod
def supports(cls, mime):
raise NotImplementedError
class FormatterPlugin(object):
def __init__(self, **kwargs):
"""
:param env: a :class:`Environment` instance
:param kwargs: additional keyword arguments that some
processors might require.
"""
self.enabled = True
self.kwargs = kwargs
def format_headers(self, headers):
"""Return processed `headers`
:param headers: The headers as text.
"""
return headers
def format_body(self, content, mime):
"""Return processed `content`.
:param mime: E.g., 'application/atom+xml'.
:param content: The body content as text
"""
return content
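For illustration, a hypothetical third-party formatter built on this base class; only the hooks it cares about need overriding:
# Illustrative only.
class UpperCaseFormatter(FormatterPlugin):
    group_name = 'format'

    def format_body(self, content, mime):
        if mime == 'text/plain':
            content = content.upper()
        return content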

48
httpie/plugins/builtin.py Normal file

@ -0,0 +1,48 @@
from base64 import b64encode
import requests.auth
from httpie.plugins.base import AuthPlugin
class BuiltinAuthPlugin(AuthPlugin):
package_name = '(builtin)'
class HTTPBasicAuth(requests.auth.HTTPBasicAuth):
def __call__(self, r):
"""
Override username/password serialization to allow unicode.
See https://github.com/jakubroztocil/httpie/issues/212
"""
r.headers['Authorization'] = type(self).make_header(
self.username, self.password).encode('latin1')
return r
@staticmethod
def make_header(username, password):
credentials = u'%s:%s' % (username, password)
token = b64encode(credentials.encode('utf8')).strip().decode('latin1')
return 'Basic %s' % token
class BasicAuthPlugin(BuiltinAuthPlugin):
name = 'Basic HTTP auth'
auth_type = 'basic'
def get_auth(self, username, password):
return HTTPBasicAuth(username, password)
class DigestAuthPlugin(BuiltinAuthPlugin):
name = 'Digest HTTP auth'
auth_type = 'digest'
def get_auth(self, username, password):
return requests.auth.HTTPDigestAuth(username, password)
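Illustrative note on the unicode-safe Basic header (cf. issue #212 linked above): credentials are utf8-encoded before base64, so non-ASCII usernames and passwords survive.
# Illustrative only.
header = HTTPBasicAuth.make_header(u'user', u'пароль')  # -> 'Basic ...'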

58
httpie/plugins/manager.py Normal file

@ -0,0 +1,58 @@
from itertools import groupby
from pkg_resources import iter_entry_points
from httpie.plugins import AuthPlugin, FormatterPlugin, ConverterPlugin
ENTRY_POINT_NAMES = [
'httpie.plugins.auth.v1',
'httpie.plugins.formatter.v1',
'httpie.plugins.converter.v1',
]
class PluginManager(object):
def __init__(self):
self._plugins = []
def __iter__(self):
return iter(self._plugins)
def register(self, *plugins):
for plugin in plugins:
self._plugins.append(plugin)
def load_installed_plugins(self):
for entry_point_name in ENTRY_POINT_NAMES:
for entry_point in iter_entry_points(entry_point_name):
plugin = entry_point.load()
plugin.package_name = entry_point.dist.key
self.register(plugin)
# Auth
def get_auth_plugins(self):
return [plugin for plugin in self if issubclass(plugin, AuthPlugin)]
def get_auth_plugin_mapping(self):
return dict((plugin.auth_type, plugin)
for plugin in self.get_auth_plugins())
def get_auth_plugin(self, auth_type):
return self.get_auth_plugin_mapping()[auth_type]
# Output processing
def get_formatters(self):
return [plugin for plugin in self
if issubclass(plugin, FormatterPlugin)]
def get_formatters_grouped(self):
groups = {}
for group_name, group in groupby(
self.get_formatters(),
key=lambda p: getattr(p, 'group_name', 'format')):
groups[group_name] = list(group)
return groups
def get_converters(self):
return [plugin for plugin in self
if issubclass(plugin, ConverterPlugin)]
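Illustrative usage, mirroring what httpie/plugins/__init__.py above does at import time (the imported plugin classes are the ones shown earlier in this diff):
# Illustrative only.
manager = PluginManager()
manager.register(BasicAuthPlugin, JSONFormatter)
manager.get_auth_plugin('basic')     # -> BasicAuthPlugin
manager.get_formatters_grouped()     # -> {'format': [JSONFormatter]}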


@ -1,115 +1,118 @@
"""Persistent, JSON-serialized sessions.
"""
import re
import os
import sys
import glob
import errno
import codecs
import shutil
import subprocess
import requests
from requests.compat import urlparse
from requests.cookies import RequestsCookieJar, create_cookie
from requests.auth import HTTPBasicAuth, HTTPDigestAuth
from argparse import OPTIONAL
from .config import BaseConfigDict, DEFAULT_CONFIG_DIR
from .output import PygmentsProcessor
from httpie.compat import urlsplit
from httpie.config import BaseConfigDict, DEFAULT_CONFIG_DIR
from httpie.plugins import plugin_manager
SESSIONS_DIR_NAME = 'sessions'
DEFAULT_SESSIONS_DIR = os.path.join(DEFAULT_CONFIG_DIR, SESSIONS_DIR_NAME)
VALID_SESSION_NAME_PATTERN = re.compile('^[a-zA-Z0-9_.-]+$')
# Request headers starting with these prefixes won't be stored in sessions.
# They are specific to each request.
# http://en.wikipedia.org/wiki/List_of_HTTP_header_fields#Requests
SESSION_IGNORED_HEADER_PREFIXES = ['Content-', 'If-']
def get_response(name, request_kwargs, config_dir, read_only=False):
def get_response(session_name, config_dir, args, read_only=False):
"""Like `client.get_response`, but applies permanent
aspects of the session to the request.
"""
sessions_dir = os.path.join(config_dir, SESSIONS_DIR_NAME)
host = Host(
root_dir=sessions_dir,
name=request_kwargs['headers'].get('Host', None)
or urlparse(request_kwargs['url']).netloc.split('@')[-1]
)
from .client import get_requests_kwargs, dump_request
if os.path.sep in session_name:
path = os.path.expanduser(session_name)
else:
hostname = (args.headers.get('Host', None)
or urlsplit(args.url).netloc.split('@')[-1])
assert re.match('^[a-zA-Z0-9_.:-]+$', hostname)
session = Session(host, name)
# host:port => host_port
hostname = hostname.replace(':', '_')
path = os.path.join(config_dir,
SESSIONS_DIR_NAME,
hostname,
session_name + '.json')
session = Session(path)
session.load()
# Update session headers with the request headers.
session['headers'].update(request_kwargs.get('headers', {}))
# Use the merged headers for the request
request_kwargs['headers'] = session['headers']
requests_kwargs = get_requests_kwargs(args, base_headers=session.headers)
if args.debug:
dump_request(requests_kwargs)
session.update_headers(requests_kwargs['headers'])
auth = request_kwargs.get('auth', None)
if auth:
session.auth = auth
if args.auth:
session.auth = {
'type': args.auth_type,
'username': args.auth.key,
'password': args.auth.value,
}
elif session.auth:
request_kwargs['auth'] = session.auth
requests_kwargs['auth'] = session.auth
requests_session = requests.Session()
requests_session.cookies = session.cookies
rsession = requests.Session(cookies=session.cookies)
try:
response = rsession.request(**request_kwargs)
response = requests_session.request(**requests_kwargs)
except Exception:
raise
else:
# Existing sessions with `read_only=True` don't get updated.
if session.is_new or not read_only:
session.cookies = rsession.cookies
if session.is_new() or not read_only:
session.cookies = requests_session.cookies
session.save()
return response
class Host(object):
"""A host is a per-host directory on the disk containing sessions files."""
def __init__(self, name, root_dir=DEFAULT_CONFIG_DIR):
self.name = name
self.root_dir = root_dir
def __iter__(self):
"""Return a iterator yielding `(session_name, session_path)`."""
for fn in sorted(glob.glob1(self.path, '*.json')):
yield os.path.splitext(fn)[0], os.path.join(self.path, fn)
def delete(self):
shutil.rmtree(self.path)
@property
def path(self):
# Name will include ':' if a port is specified, which is invalid
# on windows. DNS does not allow '_' in a domain, or for it to end
# in a number (I think?)
path = os.path.join(self.root_dir, self.name.replace(':', '_'))
try:
os.makedirs(path, mode=0o700)
except OSError as e:
if e.errno != errno.EEXIST:
raise
return path
@classmethod
def all(cls):
"""Return a generator yielding a host at a time."""
for name in sorted(glob.glob1(SESSIONS_DIR, '*')):
if os.path.isdir(os.path.join(SESSIONS_DIR, name)):
yield Host(name)
class Session(BaseConfigDict):
""""""
helpurl = 'https://github.com/jakubroztocil/httpie#sessions'
about = 'HTTPie session file'
def __init__(self, host, name, *args, **kwargs):
def __init__(self, path, *args, **kwargs):
super(Session, self).__init__(*args, **kwargs)
self.host = host
self.name = name
self._path = path
self['headers'] = {}
self['cookies'] = {}
self['auth'] = {
'type': None,
'username': None,
'password': None
}
def _get_path(self):
return self._path
def update_headers(self, request_headers):
"""
Update the session headers with the request ones while ignoring
certain name prefixes.
:type request_headers: dict
"""
for name, value in request_headers.items():
value = value.decode('utf8')
if name == 'User-Agent' and value.startswith('HTTPie/'):
continue
for prefix in SESSION_IGNORED_HEADER_PREFIXES:
if name.lower().startswith(prefix.lower()):
break
else:
self['headers'][name] = value
@property
def directory(self):
return self.host.path
def headers(self):
return self['headers']
@property
def cookies(self):
@ -122,108 +125,27 @@ class Session(BaseConfigDict):
@cookies.setter
def cookies(self, jar):
excluded = [
'_rest', 'name', 'port_specified',
'domain_specified', 'domain_initial_dot',
'path_specified', 'comment', 'comment_url'
]
"""
:type jar: CookieJar
"""
# http://docs.python.org/2/library/cookielib.html#cookie-objects
stored_attrs = ['value', 'path', 'secure', 'expires']
self['cookies'] = {}
for host in jar._cookies.values():
for path in host.values():
for name, cookie in path.items():
cookie_dict = {}
for k, v in cookie.__dict__.items():
if k not in excluded:
cookie_dict[k] = v
self['cookies'][name] = cookie_dict
for cookie in jar:
self['cookies'][cookie.name] = dict(
(attname, getattr(cookie, attname))
for attname in stored_attrs
)
@property
def auth(self):
auth = self.get('auth', None)
if not auth:
return None
Auth = {'basic': HTTPBasicAuth,
'digest': HTTPDigestAuth}[auth['type']]
return Auth(auth['username'], auth['password'])
if not auth or not auth['type']:
return
auth_plugin = plugin_manager.get_auth_plugin(auth['type'])()
return auth_plugin.get_auth(auth['username'], auth['password'])
@auth.setter
def auth(self, cred):
self['auth'] = {
'type': {HTTPBasicAuth: 'basic',
HTTPDigestAuth: 'digest'}[type(cred)],
'username': cred.username,
'password': cred.password,
}
def list_command(args):
if args.host:
for name, path in Host(args.host):
print(name + ' [' + path + ']')
else:
for host in Host.all():
print(host.name)
for name, path in host:
print(' ' + name + ' [' + path + ']')
def show_command(args):
path = Session(Host(args.host), args.name).path
if not os.path.exists(path):
sys.stderr.write('Session "%s" does not exist [%s].\n'
% (args.name, path))
sys.exit(1)
with codecs.open(path, encoding='utf8') as f:
print(path + ':\n')
proc = PygmentsProcessor()
print(proc.process_body(f.read(), 'application/json', 'json'))
print('')
def delete_command(args):
host = Host(args.host)
if not args.name:
host.delete()
else:
Session(host, args.name).delete()
def edit_command(args):
editor = os.environ.get('EDITOR', None)
if not editor:
sys.stderr.write(
'You need to configure the environment variable EDITOR.\n')
sys.exit(1)
command = editor.split()
command.append(Session(Host(args.host), args.name).path)
subprocess.call(command)
def add_commands(subparsers):
# List
list_ = subparsers.add_parser('session-list', help='list sessions')
list_.set_defaults(command=list_command)
list_.add_argument('host', nargs=OPTIONAL)
# Show
show = subparsers.add_parser('session-show', help='show a session')
show.set_defaults(command=show_command)
show.add_argument('host')
show.add_argument('name')
# Edit
edit = subparsers.add_parser(
'session-edit', help='edit a session in $EDITOR')
edit.set_defaults(command=edit_command)
edit.add_argument('host')
edit.add_argument('name')
# Delete
delete = subparsers.add_parser('session-delete', help='delete a session')
delete.set_defaults(command=delete_command)
delete.add_argument('host')
delete.add_argument('name', nargs=OPTIONAL,
help='The name of the session to be deleted.'
' If not specified, all host sessions are deleted.')
def auth(self, auth):
assert set(['type', 'username', 'password']) == set(auth.keys())
self['auth'] = auth


@ -1,111 +0,0 @@
# -*- coding: utf-8 -*-
"""
solarized256
------------
A Pygments style inspired by Solarized's 256 color mode.
:copyright: (c) 2011 by Hank Gay, (c) 2012 by John Mastro.
:license: BSD, see LICENSE for more details.
"""
from pygments.style import Style
from pygments.token import Token, Comment, Name, Keyword, Generic, Number, \
Operator, String
BASE03 = "#1c1c1c"
BASE02 = "#262626"
BASE01 = "#4e4e4e"
BASE00 = "#585858"
BASE0 = "#808080"
BASE1 = "#8a8a8a"
BASE2 = "#d7d7af"
BASE3 = "#ffffd7"
YELLOW = "#af8700"
ORANGE = "#d75f00"
RED = "#af0000"
MAGENTA = "#af005f"
VIOLET = "#5f5faf"
BLUE = "#0087ff"
CYAN = "#00afaf"
GREEN = "#5f8700"
class Solarized256Style(Style):
background_color = BASE03
styles = {
Keyword: GREEN,
Keyword.Constant: ORANGE,
Keyword.Declaration: BLUE,
Keyword.Namespace: ORANGE,
#Keyword.Pseudo
Keyword.Reserved: BLUE,
Keyword.Type: RED,
#Name
Name.Attribute: BASE1,
Name.Builtin: BLUE,
Name.Builtin.Pseudo: BLUE,
Name.Class: BLUE,
Name.Constant: ORANGE,
Name.Decorator: BLUE,
Name.Entity: ORANGE,
Name.Exception: YELLOW,
Name.Function: BLUE,
#Name.Label
#Name.Namespace
#Name.Other
Name.Tag: BLUE,
Name.Variable: BLUE,
#Name.Variable.Class
#Name.Variable.Global
#Name.Variable.Instance
#Literal
#Literal.Date
String: CYAN,
String.Backtick: BASE01,
String.Char: CYAN,
String.Doc: CYAN,
#String.Double
String.Escape: RED,
String.Heredoc: CYAN,
#String.Interpol
#String.Other
String.Regex: RED,
#String.Single
#String.Symbol
Number: CYAN,
#Number.Float
#Number.Hex
#Number.Integer
#Number.Integer.Long
#Number.Oct
Operator: BASE1,
Operator.Word: GREEN,
#Punctuation: ORANGE,
Comment: BASE01,
#Comment.Multiline
Comment.Preproc: GREEN,
#Comment.Single
Comment.Special: GREEN,
#Generic
Generic.Deleted: CYAN,
Generic.Emph: 'italic',
Generic.Error: RED,
Generic.Heading: ORANGE,
Generic.Inserted: GREEN,
#Generic.Output
#Generic.Prompt
Generic.Strong: 'bold',
Generic.Subheading: ORANGE,
#Generic.Traceback
Token: BASE1,
Token.Other: ORANGE,
}

57
httpie/utils.py Normal file

@ -0,0 +1,57 @@
from __future__ import division
import json
from httpie.compat import is_py26, OrderedDict
def load_json_preserve_order(s):
if is_py26:
return json.loads(s)
return json.loads(s, object_pairs_hook=OrderedDict)
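Illustrative behaviour: on Python 2.7+/3.x the key order of the parsed object is preserved via OrderedDict; on 2.6 (no object_pairs_hook) a plain dict is returned.
# Illustrative only.
list(load_json_preserve_order('{"b": 1, "a": 2}'))  # ['b', 'a'] on 2.7+/3.x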
def humanize_bytes(n, precision=2):
# Author: Doug Latornell
# Licence: MIT
# URL: http://code.activestate.com/recipes/577081/
"""Return a humanized string representation of a number of bytes.
Assumes `from __future__ import division`.
>>> humanize_bytes(1)
'1 B'
>>> humanize_bytes(1024, precision=1)
'1.0 kB'
>>> humanize_bytes(1024 * 123, precision=1)
'123.0 kB'
>>> humanize_bytes(1024 * 12342, precision=1)
'12.1 MB'
>>> humanize_bytes(1024 * 12342, precision=2)
'12.05 MB'
>>> humanize_bytes(1024 * 1234, precision=2)
'1.21 MB'
>>> humanize_bytes(1024 * 1234 * 1111, precision=2)
'1.31 GB'
>>> humanize_bytes(1024 * 1234 * 1111, precision=1)
'1.3 GB'
"""
abbrevs = [
(1 << 50, 'PB'),
(1 << 40, 'TB'),
(1 << 30, 'GB'),
(1 << 20, 'MB'),
(1 << 10, 'kB'),
(1, 'B')
]
if n == 1:
return '1 B'
for factor, suffix in abbrevs:
if n >= factor:
break
# noinspection PyUnboundLocalVariable
return '%.*f %s' % (precision, n / factor, suffix)

6
requirements-dev.txt Normal file

@ -0,0 +1,6 @@
tox
pytest
pytest-cov
pytest-httpbin
docutils
wheel

2
setup.cfg Normal file

@ -0,0 +1,2 @@
[wheel]
universal = 1


@ -1,37 +1,72 @@
import os
# This is purely the result of trial and error.
import sys
import re
import codecs
from setuptools import setup
from setuptools import setup, find_packages
from setuptools.command.test import test as TestCommand
import httpie
if sys.argv[-1] == 'test':
status = os.system('python tests/tests.py')
sys.exit(1 if status > 127 else status)
class PyTest(TestCommand):
# `$ python setup.py test' simply installs minimal requirements
# and runs the tests with no fancy stuff like parallel execution.
def finalize_options(self):
TestCommand.finalize_options(self)
self.test_suite = True
self.test_args = [
'--doctest-modules', '--verbose',
'./httpie', './tests'
]
self.test_suite = True
def run_tests(self):
import pytest
sys.exit(pytest.main(self.test_args))
requirements = [
# Debian has only requests==0.10.1 and httpie.deb depends on that.
'requests>=0.10.1',
tests_require = [
# Pytest needs to come last.
# https://bitbucket.org/pypa/setuptools/issue/196/
'pytest-httpbin',
'pytest',
]
install_requires = [
'requests>=2.3.0',
'Pygments>=1.5'
]
if sys.version_info[:2] in ((2, 6), (3, 1)):
# argparse has been added in Python 3.2 / 2.7
requirements.append('argparse>=1.2.1')
if 'win32' in str(sys.platform).lower():
# Terminal colors for Windows
requirements.append('colorama>=0.2.4')
### Conditional dependencies:
# sdist
if not 'bdist_wheel' in sys.argv:
try:
#noinspection PyUnresolvedReferences
import argparse
except ImportError:
install_requires.append('argparse>=1.2.1')
if 'win32' in str(sys.platform).lower():
# Terminal colors for Windows
install_requires.append('colorama>=0.2.4')
# bdist_wheel
extras_require = {
# http://wheel.readthedocs.org/en/latest/#defining-conditional-dependencies
':python_version == "2.6"'
' or python_version == "3.0"'
' or python_version == "3.1" ': ['argparse>=1.2.1'],
':sys_platform == "win32"': ['colorama>=0.2.4'],
}
def long_description():
"""Pre-process the README so that PyPi can render it properly."""
with codecs.open('README.rst', encoding='utf8') as f:
rst = f.read()
code_block = '(:\n\n)?\.\. code-block::.*'
rst = re.sub(code_block, '::', rst)
return rst
return f.read()
setup(
name='httpie',
@ -39,25 +74,31 @@ setup(
description=httpie.__doc__.strip(),
long_description=long_description(),
url='http://httpie.org/',
download_url='https://github.com/jkbr/httpie',
download_url='https://github.com/jakubroztocil/httpie',
author=httpie.__author__,
author_email='jakub@roztocil.name',
license=httpie.__licence__,
packages=['httpie'],
packages=find_packages(),
entry_points={
'console_scripts': [
'http = httpie.__main__:main',
'httpie = httpie.manage:main',
],
},
install_requires=requirements,
extras_require=extras_require,
install_requires=install_requires,
tests_require=tests_require,
cmdclass={'test': PyTest},
classifiers=[
'Development Status :: 5 - Production/Stable',
'Programming Language :: Python',
'Programming Language :: Python :: 2',
'Programming Language :: Python :: 2.6',
'Programming Language :: Python :: 2.7',
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.1',
'Programming Language :: Python :: 3.2',
'Programming Language :: Python :: 3.3',
'Programming Language :: Python :: 3.4',
'Environment :: Console',
'Intended Audience :: Developers',
'Intended Audience :: System Administrators',

8
tests/README.rst Normal file

@ -0,0 +1,8 @@
HTTPie Test Suite
=================
Please see `CONTRIBUTING`_.
.. _CONTRIBUTING: https://github.com/jakubroztocil/httpie/blob/master/CONTRIBUTING.rst


@ -0,0 +1,29 @@
-----BEGIN CERTIFICATE-----
MIIFAjCCAuoCAQEwDQYJKoZIhvcNAQEFBQAwSTELMAkGA1UEBhMCVVMxCzAJBgNV
BAgTAkNBMQswCQYDVQQHEwJTRjEPMA0GA1UEChMGSFRUUGllMQ8wDQYDVQQDEwZI
VFRQaWUwHhcNMTUwMTIzMjIyNTM2WhcNMTYwMTIzMjIyNTM2WjBFMQswCQYDVQQG
EwJBVTETMBEGA1UECBMKU29tZS1TdGF0ZTEhMB8GA1UEChMYSW50ZXJuZXQgV2lk
Z2l0cyBQdHkgTHRkMIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu6aP
iR3TpPESWKTS969fxNRoSxl8P4osjhIaUuwblFNZc8/Rn5mCMKmD506JrFV8fktQ
M6JRL7QuDC9vCw0ycr2HCV1sYX/ICgPCXYgmyigH535lb9V9hHjAgy60QgJBgSE7
lmMYaPpX6OKbT7UlzSwYtfHomXEBFA18Rlc9GwMXH8Et0RQiWIi7S6vpDRpZFxRi
gtXMceK1X8kut2ODv9B5ZwiuXh7+AMSCUkO58bXJTewQI6JadczU0JyVVjJVTny3
ta0x4SyXn8/ibylOalIsmTd/CAXJRfhV0Umb34LwaWrZ2nc+OrJwYLvOp1cG/zYl
GHkFCViRfuwwSkL4iKjVeHx2o0DxJ4bc2Z7k1ig2fTJK3gEbMz3y+YTlVNPo108H
JI77DPbkBUqLPeF7PMaN/zDqmdH0yNCW+WiHZlf6h7kZdVH3feAhTfDZbpSxhpRo
Ja84OAVCNqAuNjnZs8pMIW/iRixwP8p84At7VsS4yQQFTCjN22UhPP0PrqY3ngEj
1lbfhHC1FNZvCMxrkUAUQbeYRqLrIwB4KdDMkRJixv5Vr89NO08QtnLwQduusVkc
4Zg9HXtJTKjgQTHxHtn+OrTbpx0ogaUuYpVcQOsBT3b0EyV2z6pZiH6HK1r5Xwaq
0+nvFwpCHe58PlaI3Geihxejkv+85ZgDqXSGt7ECAwEAATANBgkqhkiG9w0BAQUF
AAOCAgEAQgIicN/uWtaYYBVEVeMGMdxzpp2pv3AaCfQMoVGaQu9VLydK/GBlYOqj
AGPjdmQ7p4ISlduXqslu646+RxZ+H6TSSj0NTF4FyR8LPckRPiePNlsGp3u6ffix
PX0554Ks+JYyFJ7qyMhsilqCYtw8prX9lj8fjzbWWXlgJFH/SRZw4xdcJ1yYA9sQ
fBHxveCWFS1ibX5+QGy/+7jPb99MP38HEIt9vTMW5aiwXeIbipXohWqcJhxL9GXz
KPsrt9a++rLjqsquhZL4uCksGmI4Gv0FQQswgSyHSSQzagee5VRB68WYSAyYdvzi
YCfkNcbQtOOQWGx4rsEdENViPs1GEZkWJJ1h9pmWzZl0U9c3cnABffK7o9v6ap2F
NrnU5H/7jLuBiUJFzqwkgAjANLRZ6hLj6h/grcnIIThJwg6KaXvpEh4UkHuqHYBF
Fq1BWZIWU25ASggEVIsCPXC2+I1oGhxK1DN/J+wIht9MBWWlQWVMZAQsBkszNZrh
nzdfMoQZTG5bT4Bf0bI5LmPaY0xBxXA1f4TLuqrEAziOjRX3vIQV4i33nZZJvPcC
mCoyhAUpTJm+OI90ePll+vBO1ENAx7EMHqNe6eCChZ/9DUsVxxtaorVq1l0xWons
ynOCgx46hGE12/oiRIKq/wGMpv6ClfJhW1N5nJahDqoIMEvnNaQ=
-----END CERTIFICATE-----


@ -0,0 +1,51 @@
-----BEGIN RSA PRIVATE KEY-----
MIIJKAIBAAKCAgEAu6aPiR3TpPESWKTS969fxNRoSxl8P4osjhIaUuwblFNZc8/R
n5mCMKmD506JrFV8fktQM6JRL7QuDC9vCw0ycr2HCV1sYX/ICgPCXYgmyigH535l
b9V9hHjAgy60QgJBgSE7lmMYaPpX6OKbT7UlzSwYtfHomXEBFA18Rlc9GwMXH8Et
0RQiWIi7S6vpDRpZFxRigtXMceK1X8kut2ODv9B5ZwiuXh7+AMSCUkO58bXJTewQ
I6JadczU0JyVVjJVTny3ta0x4SyXn8/ibylOalIsmTd/CAXJRfhV0Umb34LwaWrZ
2nc+OrJwYLvOp1cG/zYlGHkFCViRfuwwSkL4iKjVeHx2o0DxJ4bc2Z7k1ig2fTJK
3gEbMz3y+YTlVNPo108HJI77DPbkBUqLPeF7PMaN/zDqmdH0yNCW+WiHZlf6h7kZ
dVH3feAhTfDZbpSxhpRoJa84OAVCNqAuNjnZs8pMIW/iRixwP8p84At7VsS4yQQF
TCjN22UhPP0PrqY3ngEj1lbfhHC1FNZvCMxrkUAUQbeYRqLrIwB4KdDMkRJixv5V
r89NO08QtnLwQduusVkc4Zg9HXtJTKjgQTHxHtn+OrTbpx0ogaUuYpVcQOsBT3b0
EyV2z6pZiH6HK1r5Xwaq0+nvFwpCHe58PlaI3Geihxejkv+85ZgDqXSGt7ECAwEA
AQKCAgBOY1DYlZYg8/eXAhuDDkayYYzDuny1ylG8c4F9nFYVCxB2GZ1Wz3icPWP1
j1BhpkBgPbPeLfM+O0V1H6eCdVvapKOxXM52mDuHO3TJP6P8lOZgZOOY6RUK7qp0
4mC4plqYx7oto23CBLoOdgMtM937rG0SLGDfIF6z8sI0XCMRkqPpRviNu5xxYYTk
IoczSwtmYcSZJRjHhk4AGnmicDbMPRlJ2k2E0euHhI9wMAyQFUFnhLJlQGALj6pj
DtYvcM1EAUN46EXK66bXQq8zgozYS0WIJ6+wOUKQMSIgUGCF6Rvm3ZTt9xwOxxW8
wxebvfYVTJgIdh2Nfusgmye9Debl73f+k9/O4RsvYc5J5w2n4IxKqQrfCZrZqevZ
s+KvARkuQbXrHPanvEd8MPrRZ6FOAdiZYAbB9OvzuKCbEkgag8GPjMMAvrjT49N2
qp9gwGgnzczQYn+vLblJuRzofcblvLE+sxKKDE8qrfcOjN1murZP7714y5E3NmEZ
NB2NTHveTflYI1HJ1tznI1C40GdBYH4GwT/0he53rBcjNaPhyP7j3cTR1doRfZap
2oz8KE/Sij3Zb6b8r7hi+Lcwpa9txZftro7XNOJIX7ZT5B4KMiXowtCHbkMMnL6k
48tRBpyX20MqDFezBRCK7lfGhU1Coms8UcDHoFXLuGY/sAYEcQKCAQEA9D9/PD1t
e90haG6nLl4LKP5wH2wB2BK1RRBERqOVqwSmdSgn3/+GkpeYWKdhN2jyYn6qnpJQ
hXXYGtHAAHuw0dGXnOmgcsyZSlAWPzpMYRYrSh3ds8JVJdV2d58yS0ty3Ae3W6aW
p4SRuhf8yIMgOmE+TARCU1rJdke9jIIl2TQmnpJahlsZeGLEmEXE99EhB5VoshRJ
hLXNn3xTtkQz3tNR0rMAtXI6SIMB00FFEG1+jClza6PYriT9dkORI5LSVqXDEpxR
C41PvYMKTAloWd0hZ2gdfwAcJScoAv75L10uR7O1IeQI+Et5h2tj4a/OfzILa0d5
BYMmVsTa3NZXLQKCAQEAxK3uJKmoN2uswJQSKpX4WziVBgbaZdpCNnAWhfJZYIcP
zlIuv9qOc/DIPiy9Sa9XtITSkcONAwRowtI783AtXeAelyo3W7A2aLIfBBZAXDzJ
8KMc9xMDPebvUhlPSzg4bNwvduukAnktlzCjrRWPXRoSfloSpFkFPP4GwTdVcf17
1mkki6rK4rbHmIoCITlZkNbUBCcu20ivK6N3pvH1wN123bxnF7lwvB5qizdFO5P7
xRVIoCdCXQ0+WK2ZokCa/r44rcp4ffgrLoO/WRlo4yERIa9NwaucIrXmotKX8kYc
YYpFzrGs72DljS7TBZCOqek5fNQBNK79BA2hNcJ1FQKCAQBC+M44hFdq6T1p1z18
F0lUGkBAPWtcBfUyVL2D6QL2+7Vw1mvonbYWp/6cAHlFqj8cBsNd65ysm51/7ReK
il/3iFLcMatPDw7RM5iGCcQ7ssp37iyGR7j1QMzVDA/MWYnLD0qVlN4mXNFgh4dG
q73AhD2CtoBBPtmS1yUATAd4wTX9sP+la4FWYy6o2iiiEvPNkog8nBd0ji0tl/eU
OKtIZAVBkteU6RdWHqX3eSQo1v0mDY+aajjVt0rQjMJVUMLgA1+z0KzgUAUXX8EJ
DGNSkLHCGuhLlIojHdN4ztUgyZoRCxOVkWNsQbW3Dhk7HuuuMNi0t8pVWpq+nAev
Gg6ZAoIBAQC0mMk9nRO7oAGG6/Aqbn8YtEISwKQ2Nk3qUs47vKdZPWvEFi6bOILp
70TP4qEFUh6EwhngguGuzZOsoQMvq+fcdXlhcQBYDtxHEpfsVspOZ/s+HWjxbuHh
K3bBuj/XYA5f12c2GXYGV2MHm0AQJOX5pYEpyGepxZxLvy5QqRCqlQnrfaxzGycl
OpTYepEuFM0rdDhGf/xEmt9OgNHT2AXDTRhizycS39Kmyn8myl+mL2JWPA7uEF6d
txVytCWImS45kE3XNz2g3go4sf04QV7QgIKMnb4Wgg/ix4i6JgokC0DwR9mFzBxx
ylW+aCqYx35YgrGo77sTt0LZP/KxvJdpAoIBAF7YfhR1wFbW2L8sJ4gAbjPUWOMu
JUfE4FhdLcSdqCo+N8uN0qawJxXltBKfjeeoH0CDh9Yv0qqalVdSOKS9BPAa1zJc
o2kBcT8AVwoPS5oxa9eDT+7iHPMF4BErB2IGv3yYwpjqSZBJ9TsTu1B6iTf5hOL5
9pqcv/LjfcwtWu2XMCVoZj2Q8iYv55l3jJ1ByF/UDVezWajE69avvJkQZrMZmuBw
UuHelP/7anRyyelh7RkndZpPCExGmuO7pd5aG25/mBs0i34R1PElAtt8AN36f5Tk
1GxIltTNtLk4Mivwp9aZ1vf9s5FAhgPDvfGV5yFoKYmA/65ZlrKx0zlFNng=
-----END RSA PRIVATE KEY-----


@ -0,0 +1,87 @@
Bag Attributes
localKeyID: 93 0C 3E A7 82 62 36 37 5E 73 9B 05 C4 98 DF DC 04 5C B4 C9
subject=/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd
issuer=/C=US/ST=CA/L=SF/O=HTTPie/CN=HTTPie
-----BEGIN CERTIFICATE-----
MIIFAjCCAuoCAQEwDQYJKoZIhvcNAQEFBQAwSTELMAkGA1UEBhMCVVMxCzAJBgNV
BAgTAkNBMQswCQYDVQQHEwJTRjEPMA0GA1UEChMGSFRUUGllMQ8wDQYDVQQDEwZI
VFRQaWUwHhcNMTUwMTIzMjIyNTM2WhcNMTYwMTIzMjIyNTM2WjBFMQswCQYDVQQG
EwJBVTETMBEGA1UECBMKU29tZS1TdGF0ZTEhMB8GA1UEChMYSW50ZXJuZXQgV2lk
Z2l0cyBQdHkgTHRkMIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu6aP
iR3TpPESWKTS969fxNRoSxl8P4osjhIaUuwblFNZc8/Rn5mCMKmD506JrFV8fktQ
M6JRL7QuDC9vCw0ycr2HCV1sYX/ICgPCXYgmyigH535lb9V9hHjAgy60QgJBgSE7
lmMYaPpX6OKbT7UlzSwYtfHomXEBFA18Rlc9GwMXH8Et0RQiWIi7S6vpDRpZFxRi
gtXMceK1X8kut2ODv9B5ZwiuXh7+AMSCUkO58bXJTewQI6JadczU0JyVVjJVTny3
ta0x4SyXn8/ibylOalIsmTd/CAXJRfhV0Umb34LwaWrZ2nc+OrJwYLvOp1cG/zYl
GHkFCViRfuwwSkL4iKjVeHx2o0DxJ4bc2Z7k1ig2fTJK3gEbMz3y+YTlVNPo108H
JI77DPbkBUqLPeF7PMaN/zDqmdH0yNCW+WiHZlf6h7kZdVH3feAhTfDZbpSxhpRo
Ja84OAVCNqAuNjnZs8pMIW/iRixwP8p84At7VsS4yQQFTCjN22UhPP0PrqY3ngEj
1lbfhHC1FNZvCMxrkUAUQbeYRqLrIwB4KdDMkRJixv5Vr89NO08QtnLwQduusVkc
4Zg9HXtJTKjgQTHxHtn+OrTbpx0ogaUuYpVcQOsBT3b0EyV2z6pZiH6HK1r5Xwaq
0+nvFwpCHe58PlaI3Geihxejkv+85ZgDqXSGt7ECAwEAATANBgkqhkiG9w0BAQUF
AAOCAgEAQgIicN/uWtaYYBVEVeMGMdxzpp2pv3AaCfQMoVGaQu9VLydK/GBlYOqj
AGPjdmQ7p4ISlduXqslu646+RxZ+H6TSSj0NTF4FyR8LPckRPiePNlsGp3u6ffix
PX0554Ks+JYyFJ7qyMhsilqCYtw8prX9lj8fjzbWWXlgJFH/SRZw4xdcJ1yYA9sQ
fBHxveCWFS1ibX5+QGy/+7jPb99MP38HEIt9vTMW5aiwXeIbipXohWqcJhxL9GXz
KPsrt9a++rLjqsquhZL4uCksGmI4Gv0FQQswgSyHSSQzagee5VRB68WYSAyYdvzi
YCfkNcbQtOOQWGx4rsEdENViPs1GEZkWJJ1h9pmWzZl0U9c3cnABffK7o9v6ap2F
NrnU5H/7jLuBiUJFzqwkgAjANLRZ6hLj6h/grcnIIThJwg6KaXvpEh4UkHuqHYBF
Fq1BWZIWU25ASggEVIsCPXC2+I1oGhxK1DN/J+wIht9MBWWlQWVMZAQsBkszNZrh
nzdfMoQZTG5bT4Bf0bI5LmPaY0xBxXA1f4TLuqrEAziOjRX3vIQV4i33nZZJvPcC
mCoyhAUpTJm+OI90ePll+vBO1ENAx7EMHqNe6eCChZ/9DUsVxxtaorVq1l0xWons
ynOCgx46hGE12/oiRIKq/wGMpv6ClfJhW1N5nJahDqoIMEvnNaQ=
-----END CERTIFICATE-----
Bag Attributes
localKeyID: 93 0C 3E A7 82 62 36 37 5E 73 9B 05 C4 98 DF DC 04 5C B4 C9
Key Attributes: <No Attributes>
-----BEGIN RSA PRIVATE KEY-----
MIIJKAIBAAKCAgEAu6aPiR3TpPESWKTS969fxNRoSxl8P4osjhIaUuwblFNZc8/R
n5mCMKmD506JrFV8fktQM6JRL7QuDC9vCw0ycr2HCV1sYX/ICgPCXYgmyigH535l
b9V9hHjAgy60QgJBgSE7lmMYaPpX6OKbT7UlzSwYtfHomXEBFA18Rlc9GwMXH8Et
0RQiWIi7S6vpDRpZFxRigtXMceK1X8kut2ODv9B5ZwiuXh7+AMSCUkO58bXJTewQ
I6JadczU0JyVVjJVTny3ta0x4SyXn8/ibylOalIsmTd/CAXJRfhV0Umb34LwaWrZ
2nc+OrJwYLvOp1cG/zYlGHkFCViRfuwwSkL4iKjVeHx2o0DxJ4bc2Z7k1ig2fTJK
3gEbMz3y+YTlVNPo108HJI77DPbkBUqLPeF7PMaN/zDqmdH0yNCW+WiHZlf6h7kZ
dVH3feAhTfDZbpSxhpRoJa84OAVCNqAuNjnZs8pMIW/iRixwP8p84At7VsS4yQQF
TCjN22UhPP0PrqY3ngEj1lbfhHC1FNZvCMxrkUAUQbeYRqLrIwB4KdDMkRJixv5V
r89NO08QtnLwQduusVkc4Zg9HXtJTKjgQTHxHtn+OrTbpx0ogaUuYpVcQOsBT3b0
EyV2z6pZiH6HK1r5Xwaq0+nvFwpCHe58PlaI3Geihxejkv+85ZgDqXSGt7ECAwEA
AQKCAgBOY1DYlZYg8/eXAhuDDkayYYzDuny1ylG8c4F9nFYVCxB2GZ1Wz3icPWP1
j1BhpkBgPbPeLfM+O0V1H6eCdVvapKOxXM52mDuHO3TJP6P8lOZgZOOY6RUK7qp0
4mC4plqYx7oto23CBLoOdgMtM937rG0SLGDfIF6z8sI0XCMRkqPpRviNu5xxYYTk
IoczSwtmYcSZJRjHhk4AGnmicDbMPRlJ2k2E0euHhI9wMAyQFUFnhLJlQGALj6pj
DtYvcM1EAUN46EXK66bXQq8zgozYS0WIJ6+wOUKQMSIgUGCF6Rvm3ZTt9xwOxxW8
wxebvfYVTJgIdh2Nfusgmye9Debl73f+k9/O4RsvYc5J5w2n4IxKqQrfCZrZqevZ
s+KvARkuQbXrHPanvEd8MPrRZ6FOAdiZYAbB9OvzuKCbEkgag8GPjMMAvrjT49N2
qp9gwGgnzczQYn+vLblJuRzofcblvLE+sxKKDE8qrfcOjN1murZP7714y5E3NmEZ
NB2NTHveTflYI1HJ1tznI1C40GdBYH4GwT/0he53rBcjNaPhyP7j3cTR1doRfZap
2oz8KE/Sij3Zb6b8r7hi+Lcwpa9txZftro7XNOJIX7ZT5B4KMiXowtCHbkMMnL6k
48tRBpyX20MqDFezBRCK7lfGhU1Coms8UcDHoFXLuGY/sAYEcQKCAQEA9D9/PD1t
e90haG6nLl4LKP5wH2wB2BK1RRBERqOVqwSmdSgn3/+GkpeYWKdhN2jyYn6qnpJQ
hXXYGtHAAHuw0dGXnOmgcsyZSlAWPzpMYRYrSh3ds8JVJdV2d58yS0ty3Ae3W6aW
p4SRuhf8yIMgOmE+TARCU1rJdke9jIIl2TQmnpJahlsZeGLEmEXE99EhB5VoshRJ
hLXNn3xTtkQz3tNR0rMAtXI6SIMB00FFEG1+jClza6PYriT9dkORI5LSVqXDEpxR
C41PvYMKTAloWd0hZ2gdfwAcJScoAv75L10uR7O1IeQI+Et5h2tj4a/OfzILa0d5
BYMmVsTa3NZXLQKCAQEAxK3uJKmoN2uswJQSKpX4WziVBgbaZdpCNnAWhfJZYIcP
zlIuv9qOc/DIPiy9Sa9XtITSkcONAwRowtI783AtXeAelyo3W7A2aLIfBBZAXDzJ
8KMc9xMDPebvUhlPSzg4bNwvduukAnktlzCjrRWPXRoSfloSpFkFPP4GwTdVcf17
1mkki6rK4rbHmIoCITlZkNbUBCcu20ivK6N3pvH1wN123bxnF7lwvB5qizdFO5P7
xRVIoCdCXQ0+WK2ZokCa/r44rcp4ffgrLoO/WRlo4yERIa9NwaucIrXmotKX8kYc
YYpFzrGs72DljS7TBZCOqek5fNQBNK79BA2hNcJ1FQKCAQBC+M44hFdq6T1p1z18
F0lUGkBAPWtcBfUyVL2D6QL2+7Vw1mvonbYWp/6cAHlFqj8cBsNd65ysm51/7ReK
il/3iFLcMatPDw7RM5iGCcQ7ssp37iyGR7j1QMzVDA/MWYnLD0qVlN4mXNFgh4dG
q73AhD2CtoBBPtmS1yUATAd4wTX9sP+la4FWYy6o2iiiEvPNkog8nBd0ji0tl/eU
OKtIZAVBkteU6RdWHqX3eSQo1v0mDY+aajjVt0rQjMJVUMLgA1+z0KzgUAUXX8EJ
DGNSkLHCGuhLlIojHdN4ztUgyZoRCxOVkWNsQbW3Dhk7HuuuMNi0t8pVWpq+nAev
Gg6ZAoIBAQC0mMk9nRO7oAGG6/Aqbn8YtEISwKQ2Nk3qUs47vKdZPWvEFi6bOILp
70TP4qEFUh6EwhngguGuzZOsoQMvq+fcdXlhcQBYDtxHEpfsVspOZ/s+HWjxbuHh
K3bBuj/XYA5f12c2GXYGV2MHm0AQJOX5pYEpyGepxZxLvy5QqRCqlQnrfaxzGycl
OpTYepEuFM0rdDhGf/xEmt9OgNHT2AXDTRhizycS39Kmyn8myl+mL2JWPA7uEF6d
txVytCWImS45kE3XNz2g3go4sf04QV7QgIKMnb4Wgg/ix4i6JgokC0DwR9mFzBxx
ylW+aCqYx35YgrGo77sTt0LZP/KxvJdpAoIBAF7YfhR1wFbW2L8sJ4gAbjPUWOMu
JUfE4FhdLcSdqCo+N8uN0qawJxXltBKfjeeoH0CDh9Yv0qqalVdSOKS9BPAa1zJc
o2kBcT8AVwoPS5oxa9eDT+7iHPMF4BErB2IGv3yYwpjqSZBJ9TsTu1B6iTf5hOL5
9pqcv/LjfcwtWu2XMCVoZj2Q8iYv55l3jJ1ByF/UDVezWajE69avvJkQZrMZmuBw
UuHelP/7anRyyelh7RkndZpPCExGmuO7pd5aG25/mBs0i34R1PElAtt8AN36f5Tk
1GxIltTNtLk4Mivwp9aZ1vf9s5FAhgPDvfGV5yFoKYmA/65ZlrKx0zlFNng=
-----END RSA PRIVATE KEY-----

41
tests/fixtures.py Normal file

@ -0,0 +1,41 @@
"""Test data"""
from os import path
import codecs
def patharg(path):
"""
Backslashes need to be escaped in ITEM args,
even in Windows paths.
"""
return path.replace('\\', '\\\\\\')
FIXTURES_ROOT = path.join(path.abspath(path.dirname(__file__)), 'fixtures')
FILE_PATH = path.join(FIXTURES_ROOT, 'test.txt')
JSON_FILE_PATH = path.join(FIXTURES_ROOT, 'test.json')
BIN_FILE_PATH = path.join(FIXTURES_ROOT, 'test.bin')
FILE_PATH_ARG = patharg(FILE_PATH)
BIN_FILE_PATH_ARG = patharg(BIN_FILE_PATH)
JSON_FILE_PATH_ARG = patharg(JSON_FILE_PATH)
with codecs.open(FILE_PATH, encoding='utf8') as f:
# Strip because we don't want newlines in the data, so that we can
# easily count occurrences even when embedded in JSON (where the
# newline would be escaped).
FILE_CONTENT = f.read().strip()
with codecs.open(JSON_FILE_PATH, encoding='utf8') as f:
JSON_FILE_CONTENT = f.read()
with open(BIN_FILE_PATH, 'rb') as f:
BIN_FILE_CONTENT = f.read()
UNICODE = FILE_CONTENT


@ -1 +0,0 @@
__test_file_content__


@ -1 +0,0 @@
__test_file_content__

[binary image fixture changed; 1.1 KiB before and after]

4
tests/fixtures/test.json vendored Normal file

@ -0,0 +1,4 @@
{
"name": "Jakub Roztočil",
"unicode": "χρυσαφὶ 太陽 เลิศ ♜♞♝♛♚♝♞♜ оживлённым तान्यहानि 有朋"
}

1
tests/fixtures/test.txt vendored Normal file

@ -0,0 +1 @@
[one line of UTF8-encoded unicode text] χρυσαφὶ 太陽 เลิศ ♜♞♝♛♚♝♞♜ оживлённым तान्यहानि 有朋 ஸ்றீனிவாஸ ٱلرَّحْمـَبنِ

62
tests/test_auth.py Normal file

@ -0,0 +1,62 @@
"""HTTP authentication-related tests."""
import requests
import pytest
from utils import http, add_auth, HTTP_OK, TestEnvironment
import httpie.input
import httpie.cli
class TestAuth:
def test_basic_auth(self, httpbin):
r = http('--auth=user:password',
'GET', httpbin.url + '/basic-auth/user/password')
assert HTTP_OK in r
assert r.json == {'authenticated': True, 'user': 'user'}
@pytest.mark.skipif(
requests.__version__ == '0.13.6',
reason='Redirects with prefetch=False are broken in Requests 0.13.6')
def test_digest_auth(self, httpbin):
r = http('--auth-type=digest', '--auth=user:password',
'GET', httpbin.url + '/digest-auth/auth/user/password')
assert HTTP_OK in r
assert r.json == {'authenticated': True, 'user': 'user'}
def test_password_prompt(self, httpbin):
httpie.input.AuthCredentials._getpass = lambda self, prompt: 'password'
r = http('--auth', 'user',
'GET', httpbin.url + '/basic-auth/user/password')
assert HTTP_OK in r
assert r.json == {'authenticated': True, 'user': 'user'}
def test_credentials_in_url(self, httpbin):
url = add_auth(httpbin.url + '/basic-auth/user/password',
auth='user:password')
r = http('GET', url)
assert HTTP_OK in r
assert r.json == {'authenticated': True, 'user': 'user'}
def test_credentials_in_url_auth_flag_has_priority(self, httpbin):
"""When credentials are passed in URL and via -a at the same time,
then the ones from -a are used."""
url = add_auth(httpbin.url + '/basic-auth/user/password',
auth='user:wrong')
r = http('--auth=user:password', 'GET', url)
assert HTTP_OK in r
assert r.json == {'authenticated': True, 'user': 'user'}
@pytest.mark.parametrize('url', [
'username@example.org',
'username:@example.org',
])
def test_only_username_in_url(self, url):
"""
https://github.com/jakubroztocil/httpie/issues/242
"""
args = httpie.cli.parser.parse_args(args=[url], env=TestEnvironment())
assert args.auth
assert args.auth.key == 'username'
assert args.auth.value == ''

54
tests/test_binary.py Normal file

@ -0,0 +1,54 @@
"""Tests for dealing with binary request and response data."""
from httpie.compat import urlopen
from httpie.output.streams import BINARY_SUPPRESSED_NOTICE
from utils import TestEnvironment, http
from fixtures import BIN_FILE_PATH, BIN_FILE_CONTENT, BIN_FILE_PATH_ARG
class TestBinaryRequestData:
def test_binary_stdin(self, httpbin):
with open(BIN_FILE_PATH, 'rb') as stdin:
env = TestEnvironment(
stdin=stdin,
stdin_isatty=False,
stdout_isatty=False
)
r = http('--print=B', 'POST', httpbin.url + '/post', env=env)
assert r == BIN_FILE_CONTENT
def test_binary_file_path(self, httpbin):
env = TestEnvironment(stdin_isatty=True, stdout_isatty=False)
r = http('--print=B', 'POST', httpbin.url + '/post',
'@' + BIN_FILE_PATH_ARG, env=env, )
assert r == BIN_FILE_CONTENT
def test_binary_file_form(self, httpbin):
env = TestEnvironment(stdin_isatty=True, stdout_isatty=False)
r = http('--print=B', '--form', 'POST', httpbin.url + '/post',
'test@' + BIN_FILE_PATH_ARG, env=env)
assert bytes(BIN_FILE_CONTENT) in bytes(r)
class TestBinaryResponseData:
url = 'http://www.google.com/favicon.ico'
@property
def bindata(self):
if not hasattr(self, '_bindata'):
self._bindata = urlopen(self.url).read()
return self._bindata
def test_binary_suppresses_when_terminal(self):
r = http('GET', self.url)
assert BINARY_SUPPRESSED_NOTICE.decode() in r
def test_binary_suppresses_when_not_terminal_but_pretty(self):
env = TestEnvironment(stdin_isatty=True, stdout_isatty=False)
r = http('--pretty=all', 'GET', self.url,
env=env)
assert BINARY_SUPPRESSED_NOTICE.decode() in r
def test_binary_included_and_correct_when_suitable(self):
env = TestEnvironment(stdin_isatty=True, stdout_isatty=False)
r = http('GET', self.url, env=env)
assert r == self.bindata

323
tests/test_cli.py Normal file

@ -0,0 +1,323 @@
"""CLI argument parsing related tests."""
import json
# noinspection PyCompatibility
import argparse
import os
import pytest
from httpie import input
from httpie.input import KeyValue, KeyValueArgType, DataDict
from httpie import ExitStatus
from httpie.cli import parser
from utils import TestEnvironment, http, HTTP_OK
from fixtures import (
FILE_PATH_ARG, JSON_FILE_PATH_ARG,
JSON_FILE_CONTENT, FILE_CONTENT, FILE_PATH
)
class TestItemParsing:
key_value = KeyValueArgType(*input.SEP_GROUP_ALL_ITEMS)
def test_invalid_items(self):
items = ['no-separator']
for item in items:
pytest.raises(argparse.ArgumentTypeError, self.key_value, item)
def test_escape_separator(self):
items = input.parse_items([
# headers
self.key_value(r'foo\:bar:baz'),
self.key_value(r'jack\@jill:hill'),
# data
self.key_value(r'baz\=bar=foo'),
# files
self.key_value(r'bar\@baz@%s' % FILE_PATH_ARG),
])
# `requests.structures.CaseInsensitiveDict` => `dict`
headers = dict(items.headers._store.values())
assert headers == {
'foo:bar': 'baz',
'jack@jill': 'hill',
}
assert items.data == {'baz=bar': 'foo'}
assert 'bar@baz' in items.files
@pytest.mark.parametrize(('string', 'key', 'sep', 'value'), [
('path=c:\windows', 'path', '=', 'c:\windows'),
('path=c:\windows\\', 'path', '=', 'c:\windows\\'),
('path\==c:\windows', 'path=', '=', 'c:\windows'),
])
def test_backslash_before_non_special_character_does_not_escape(
self, string, key, sep, value):
expected = KeyValue(orig=string, key=key, sep=sep, value=value)
actual = self.key_value(string)
assert actual == expected
def test_escape_longsep(self):
items = input.parse_items([
self.key_value(r'bob\:==foo'),
])
assert items.params == {'bob:': 'foo'}
def test_valid_items(self):
items = input.parse_items([
self.key_value('string=value'),
self.key_value('header:value'),
self.key_value('list:=["a", 1, {}, false]'),
self.key_value('obj:={"a": "b"}'),
self.key_value('eh:'),
self.key_value('ed='),
self.key_value('bool:=true'),
self.key_value('file@' + FILE_PATH_ARG),
self.key_value('query==value'),
self.key_value('string-embed=@' + FILE_PATH_ARG),
self.key_value('raw-json-embed:=@' + JSON_FILE_PATH_ARG),
])
# Parsed headers
# `requests.structures.CaseInsensitiveDict` => `dict`
headers = dict(items.headers._store.values())
assert headers == {'header': 'value', 'eh': ''}
# Parsed data
raw_json_embed = items.data.pop('raw-json-embed')
assert raw_json_embed == json.loads(JSON_FILE_CONTENT)
items.data['string-embed'] = items.data['string-embed'].strip()
assert dict(items.data) == {
"ed": "",
"string": "value",
"bool": True,
"list": ["a", 1, {}, False],
"obj": {"a": "b"},
"string-embed": FILE_CONTENT,
}
# Parsed query string parameters
assert items.params == {'query': 'value'}
# Parsed file fields
assert 'file' in items.files
assert (items.files['file'][1].read().strip().decode('utf8')
== FILE_CONTENT)
def test_multiple_file_fields_with_same_field_name(self):
items = input.parse_items([
self.key_value('file_field@' + FILE_PATH_ARG),
self.key_value('file_field@' + FILE_PATH_ARG),
])
assert len(items.files['file_field']) == 2
def test_multiple_text_fields_with_same_field_name(self):
items = input.parse_items(
[self.key_value('text_field=a'),
self.key_value('text_field=b')],
data_class=DataDict
)
assert items.data['text_field'] == ['a', 'b']
assert list(items.data.items()) == [
('text_field', 'a'),
('text_field', 'b'),
]
class TestQuerystring:
def test_query_string_params_in_url(self, httpbin):
r = http('--print=Hhb', 'GET', httpbin.url + '/get?a=1&b=2')
path = '/get?a=1&b=2'
url = httpbin.url + path
assert HTTP_OK in r
assert 'GET %s HTTP/1.1' % path in r
assert '"url": "%s"' % url in r
def test_query_string_params_items(self, httpbin):
r = http('--print=Hhb', 'GET', httpbin.url + '/get', 'a==1')
path = '/get?a=1'
url = httpbin.url + path
assert HTTP_OK in r
assert 'GET %s HTTP/1.1' % path in r
assert '"url": "%s"' % url in r
def test_query_string_params_in_url_and_items_with_duplicates(self,
httpbin):
r = http('--print=Hhb', 'GET',
httpbin.url + '/get?a=1&a=1', 'a==1', 'a==1')
path = '/get?a=1&a=1&a=1&a=1'
url = httpbin.url + path
assert HTTP_OK in r
assert 'GET %s HTTP/1.1' % path in r
assert '"url": "%s"' % url in r
class TestURLshorthand:
def test_expand_localhost_shorthand(self):
args = parser.parse_args(args=[':'], env=TestEnvironment())
assert args.url == 'http://localhost'
def test_expand_localhost_shorthand_with_slash(self):
args = parser.parse_args(args=[':/'], env=TestEnvironment())
assert args.url == 'http://localhost/'
def test_expand_localhost_shorthand_with_port(self):
args = parser.parse_args(args=[':3000'], env=TestEnvironment())
assert args.url == 'http://localhost:3000'
def test_expand_localhost_shorthand_with_path(self):
args = parser.parse_args(args=[':/path'], env=TestEnvironment())
assert args.url == 'http://localhost/path'
def test_expand_localhost_shorthand_with_port_and_slash(self):
args = parser.parse_args(args=[':3000/'], env=TestEnvironment())
assert args.url == 'http://localhost:3000/'
def test_expand_localhost_shorthand_with_port_and_path(self):
args = parser.parse_args(args=[':3000/path'], env=TestEnvironment())
assert args.url == 'http://localhost:3000/path'
def test_dont_expand_shorthand_ipv6_as_shorthand(self):
args = parser.parse_args(args=['::1'], env=TestEnvironment())
assert args.url == 'http://::1'
def test_dont_expand_longer_ipv6_as_shorthand(self):
args = parser.parse_args(
args=['::ffff:c000:0280'],
env=TestEnvironment()
)
assert args.url == 'http://::ffff:c000:0280'
def test_dont_expand_full_ipv6_as_shorthand(self):
args = parser.parse_args(
args=['0000:0000:0000:0000:0000:0000:0000:0001'],
env=TestEnvironment()
)
assert args.url == 'http://0000:0000:0000:0000:0000:0000:0000:0001'
class TestArgumentParser:
def setup_method(self, method):
self.parser = input.Parser()
def test_guess_when_method_set_and_valid(self):
self.parser.args = argparse.Namespace()
self.parser.args.method = 'GET'
self.parser.args.url = 'http://example.com/'
self.parser.args.items = []
self.parser.args.ignore_stdin = False
self.parser.env = TestEnvironment()
self.parser._guess_method()
assert self.parser.args.method == 'GET'
assert self.parser.args.url == 'http://example.com/'
assert self.parser.args.items == []
def test_guess_when_method_not_set(self):
self.parser.args = argparse.Namespace()
self.parser.args.method = None
self.parser.args.url = 'http://example.com/'
self.parser.args.items = []
self.parser.args.ignore_stdin = False
self.parser.env = TestEnvironment()
self.parser._guess_method()
assert self.parser.args.method == 'GET'
assert self.parser.args.url == 'http://example.com/'
assert self.parser.args.items == []
def test_guess_when_method_set_but_invalid_and_data_field(self):
self.parser.args = argparse.Namespace()
self.parser.args.method = 'http://example.com/'
self.parser.args.url = 'data=field'
self.parser.args.items = []
self.parser.args.ignore_stdin = False
self.parser.env = TestEnvironment()
self.parser._guess_method()
assert self.parser.args.method == 'POST'
assert self.parser.args.url == 'http://example.com/'
assert self.parser.args.items == [
KeyValue(key='data',
value='field',
sep='=',
orig='data=field')
]
def test_guess_when_method_set_but_invalid_and_header_field(self):
self.parser.args = argparse.Namespace()
self.parser.args.method = 'http://example.com/'
self.parser.args.url = 'test:header'
self.parser.args.items = []
self.parser.args.ignore_stdin = False
self.parser.env = TestEnvironment()
self.parser._guess_method()
assert self.parser.args.method == 'GET'
assert self.parser.args.url == 'http://example.com/'
assert self.parser.args.items == [
KeyValue(key='test',
value='header',
sep=':',
orig='test:header')
]
def test_guess_when_method_set_but_invalid_and_item_exists(self):
self.parser.args = argparse.Namespace()
self.parser.args.method = 'http://example.com/'
self.parser.args.url = 'new_item=a'
self.parser.args.items = [
KeyValue(
key='old_item', value='b', sep='=', orig='old_item=b')
]
self.parser.args.ignore_stdin = False
self.parser.env = TestEnvironment()
self.parser._guess_method()
assert self.parser.args.items == [
KeyValue(key='new_item', value='a', sep='=', orig='new_item=a'),
KeyValue(
key='old_item', value='b', sep='=', orig='old_item=b'),
]
class TestNoOptions:
def test_valid_no_options(self, httpbin):
r = http('--verbose', '--no-verbose', 'GET', httpbin.url + '/get')
assert 'GET /get HTTP/1.1' not in r
def test_invalid_no_options(self, httpbin):
r = http('--no-war', 'GET', httpbin.url + '/get',
error_exit_ok=True)
assert r.exit_status == 1
assert 'unrecognized arguments: --no-war' in r.stderr
assert 'GET /get HTTP/1.1' not in r
class TestIgnoreStdin:
def test_ignore_stdin(self, httpbin):
with open(FILE_PATH) as f:
env = TestEnvironment(stdin=f, stdin_isatty=False)
r = http('--ignore-stdin', '--verbose', httpbin.url + '/get',
env=env)
assert HTTP_OK in r
assert 'GET /get HTTP' in r, "Don't default to POST."
assert FILE_CONTENT not in r, "Don't send stdin data."
def test_ignore_stdin_cannot_prompt_password(self, httpbin):
r = http('--ignore-stdin', '--auth=no-password', httpbin.url + '/get',
error_exit_ok=True)
assert r.exit_status == ExitStatus.ERROR
assert 'because --ignore-stdin' in r.stderr
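For reference, the request-item syntax that TestItemParsing exercises above, condensed into one sketch (the key/value strings here are made-up illustrations, not fixtures from the suite):

from httpie import input

kv = input.KeyValueArgType(*input.SEP_GROUP_ALL_ITEMS)
items = input.parse_items([
    kv('X-API-Key:secret'),       # header                key:value
    kv('name=John'),              # data field            key=value
    kv('limit==50'),              # URL query parameter   key==value
    kv('tags:=["a", 1, false]'),  # raw JSON data field   key:=json
])
# items.headers, items.data and items.params now hold the parsed values,
# mirroring the assertions in the tests above.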

106
tests/test_defaults.py Normal file
@@ -0,0 +1,106 @@
"""
Tests for the provided defaults regarding HTTP method, and --json vs. --form.
"""
from utils import TestEnvironment, http, HTTP_OK, no_content_type
from fixtures import FILE_PATH
class TestImplicitHTTPMethod:
def test_implicit_GET(self, httpbin):
r = http(httpbin.url + '/get')
assert HTTP_OK in r
def test_implicit_GET_with_headers(self, httpbin):
r = http(httpbin.url + '/headers', 'Foo:bar')
assert HTTP_OK in r
assert r.json['headers']['Foo'] == 'bar'
def test_implicit_POST_json(self, httpbin):
r = http(httpbin.url + '/post', 'hello=world')
assert HTTP_OK in r
assert r.json['json'] == {'hello': 'world'}
def test_implicit_POST_form(self, httpbin):
r = http('--form', httpbin.url + '/post', 'foo=bar')
assert HTTP_OK in r
assert r.json['form'] == {'foo': 'bar'}
def test_implicit_POST_stdin(self, httpbin):
with open(FILE_PATH) as f:
env = TestEnvironment(stdin_isatty=False, stdin=f)
r = http('--form', httpbin.url + '/post', env=env)
assert HTTP_OK in r
class TestAutoContentTypeAndAcceptHeaders:
"""
Test that Accept and Content-Type correctly default to JSON,
but can still be overridden. The same applies to Content-Type when
--form, -f is used.
"""
def test_GET_no_data_no_auto_headers(self, httpbin):
# https://github.com/jakubroztocil/httpie/issues/62
r = http('GET', httpbin.url + '/headers')
assert HTTP_OK in r
assert r.json['headers']['Accept'] == '*/*'
assert no_content_type(r.json['headers'])
def test_POST_no_data_no_auto_headers(self, httpbin):
# JSON headers shouldn't be automatically set for POST with no data.
r = http('POST', httpbin.url + '/post')
assert HTTP_OK in r
assert '"Accept": "*/*"' in r
assert '"Content-Type": "application/json' not in r
def test_POST_with_data_auto_JSON_headers(self, httpbin):
r = http('POST', httpbin.url + '/post', 'a=b')
assert HTTP_OK in r
assert '"Accept": "application/json"' in r
assert '"Content-Type": "application/json; charset=utf-8' in r
def test_GET_with_data_auto_JSON_headers(self, httpbin):
# JSON headers should automatically be set also for GET with data.
r = http('GET', httpbin.url + '/get', 'a=b')
assert HTTP_OK in r
assert '"Accept": "application/json"' in r, r
assert '"Content-Type": "application/json; charset=utf-8' in r
def test_POST_explicit_JSON_auto_JSON_accept(self, httpbin):
r = http('--json', 'POST', httpbin.url + '/post')
assert HTTP_OK in r
assert r.json['headers']['Accept'] == 'application/json'
# Make sure Content-Type gets set even with no data.
# https://github.com/jakubroztocil/httpie/issues/137
assert 'application/json' in r.json['headers']['Content-Type']
def test_GET_explicit_JSON_explicit_headers(self, httpbin):
r = http('--json', 'GET', httpbin.url + '/headers',
'Accept:application/xml',
'Content-Type:application/xml')
assert HTTP_OK in r
assert '"Accept": "application/xml"' in r
assert '"Content-Type": "application/xml"' in r
def test_POST_form_auto_Content_Type(self, httpbin):
r = http('--form', 'POST', httpbin.url + '/post')
assert HTTP_OK in r
assert '"Content-Type": "application/x-www-form-urlencoded' in r
def test_POST_form_Content_Type_override(self, httpbin):
r = http('--form', 'POST', httpbin.url + '/post',
'Content-Type:application/xml')
assert HTTP_OK in r
assert '"Content-Type": "application/xml"' in r
def test_print_only_body_when_stdout_redirected_by_default(self, httpbin):
env = TestEnvironment(stdin_isatty=True, stdout_isatty=False)
r = http('GET', httpbin.url + '/get', env=env)
assert 'HTTP/' not in r
def test_print_overridable_when_stdout_redirected(self, httpbin):
env = TestEnvironment(stdin_isatty=True, stdout_isatty=False)
r = http('--print=h', 'GET', httpbin.url + '/get', env=env)
assert HTTP_OK in r

39
tests/test_docs.py Normal file
@@ -0,0 +1,39 @@
import os
import fnmatch
import subprocess
import pytest
from utils import TESTS_ROOT
def has_docutils():
try:
#noinspection PyUnresolvedReferences
import docutils
return True
except ImportError:
return False
def rst_filenames():
for root, dirnames, filenames in os.walk(os.path.dirname(TESTS_ROOT)):
if '.tox' not in root:
for filename in fnmatch.filter(filenames, '*.rst'):
yield os.path.join(root, filename)
filenames = list(rst_filenames())
assert filenames
@pytest.mark.skipif(not has_docutils(), reason='docutils not installed')
@pytest.mark.parametrize('filename', filenames)
def test_rst_file_syntax(filename):
p = subprocess.Popen(
['rst2pseudoxml.py', '--report=1', '--exit-status=1', filename],
stderr=subprocess.PIPE,
stdout=subprocess.PIPE
)
err = p.communicate()[1]
assert p.returncode == 0, err

139
tests/test_downloads.py Normal file
@@ -0,0 +1,139 @@
import os
import time
import pytest
from requests.structures import CaseInsensitiveDict
from httpie.compat import urlopen
from httpie.downloads import (
parse_content_range, filename_from_content_disposition, filename_from_url,
get_unique_filename, ContentRangeError, Download,
)
from utils import http, TestEnvironment
class Response(object):
# noinspection PyDefaultArgument
def __init__(self, url, headers={}, status_code=200):
self.url = url
self.headers = CaseInsensitiveDict(headers)
self.status_code = status_code
class TestDownloadUtils:
def test_Content_Range_parsing(self):
parse = parse_content_range
assert parse('bytes 100-199/200', 100) == 200
assert parse('bytes 100-199/*', 100) == 200
# missing
pytest.raises(ContentRangeError, parse, None, 100)
# syntax error
pytest.raises(ContentRangeError, parse, 'beers 100-199/*', 100)
# unexpected range
pytest.raises(ContentRangeError, parse, 'bytes 100-199/*', 99)
# invalid instance-length
pytest.raises(ContentRangeError, parse, 'bytes 100-199/199', 100)
# invalid byte-range-resp-spec
pytest.raises(ContentRangeError, parse, 'bytes 100-99/199', 100)
# invalid byte-range-resp-spec
pytest.raises(ContentRangeError, parse, 'bytes 100-100/*', 100)
@pytest.mark.parametrize('header, expected_filename', [
('attachment; filename=hello-WORLD_123.txt', 'hello-WORLD_123.txt'),
('attachment; filename=".hello-WORLD_123.txt"', 'hello-WORLD_123.txt'),
('attachment; filename="white space.txt"', 'white space.txt'),
(r'attachment; filename="\"quotes\".txt"', '"quotes".txt'),
('attachment; filename=/etc/hosts', 'hosts'),
('attachment; filename=', None)
])
def test_Content_Disposition_parsing(self, header, expected_filename):
assert filename_from_content_disposition(header) == expected_filename
def test_filename_from_url(self):
assert 'foo.txt' == filename_from_url(
url='http://example.org/foo',
content_type='text/plain'
)
assert 'foo.html' == filename_from_url(
url='http://example.org/foo',
content_type='text/html; charset=utf8'
)
assert 'foo' == filename_from_url(
url='http://example.org/foo',
content_type=None
)
assert 'foo' == filename_from_url(
url='http://example.org/foo',
content_type='x-foo/bar'
)
def test_unique_filename(self):
def attempts(unique_on_attempt=0):
# noinspection PyUnresolvedReferences,PyUnusedLocal
def exists(filename):
if exists.attempt == unique_on_attempt:
return False
exists.attempt += 1
return True
exists.attempt = 0
return exists
assert 'foo.bar' == get_unique_filename('foo.bar', attempts(0))
assert 'foo.bar-1' == get_unique_filename('foo.bar', attempts(1))
assert 'foo.bar-10' == get_unique_filename('foo.bar', attempts(10))
class TestDownloads:
# TODO: more tests
def test_actual_download(self, httpbin):
url = httpbin.url + '/robots.txt'
body = urlopen(url).read().decode()
env = TestEnvironment(stdin_isatty=True, stdout_isatty=False)
r = http('--download', url, env=env)
assert 'Downloading' in r.stderr
assert '[K' in r.stderr
assert 'Done' in r.stderr
assert body == r
def test_download_with_Content_Length(self, httpbin):
devnull = open(os.devnull, 'w')
download = Download(output_file=devnull, progress_file=devnull)
download.start(Response(
url=httpbin.url + '/',
headers={'Content-Length': 10}
))
time.sleep(1.1)
download.chunk_downloaded(b'12345')
time.sleep(1.1)
download.chunk_downloaded(b'12345')
download.finish()
assert not download.interrupted
def test_download_no_Content_Length(self, httpbin):
devnull = open(os.devnull, 'w')
download = Download(output_file=devnull, progress_file=devnull)
download.start(Response(url=httpbin.url + '/'))
time.sleep(1.1)
download.chunk_downloaded(b'12345')
download.finish()
assert not download.interrupted
def test_download_interrupted(self, httpbin):
devnull = open(os.devnull, 'w')
download = Download(output_file=devnull, progress_file=devnull)
download.start(Response(
url=httpbin.url + '/',
headers={'Content-Length': 5}
))
download.chunk_downloaded(b'1234')
download.finish()
assert download.interrupted
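The Download lifecycle that these three tests walk through, in condensed form (a sketch reusing the Response stub defined at the top of this file; the URL is purely illustrative):

import os
from httpie.downloads import Download

with open(os.devnull, 'w') as devnull:
    download = Download(output_file=devnull, progress_file=devnull)
    download.start(Response(url='http://example.org/robots.txt',
                            headers={'Content-Length': 5}))
    download.chunk_downloaded(b'12345')  # all five advertised bytes arrive
    download.finish()
    # receiving fewer bytes than Content-Length would mark it interrupted
    assert not download.interrupted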

63
tests/test_exit_status.py Normal file
@@ -0,0 +1,63 @@
import requests
import pytest
from httpie import ExitStatus
from utils import TestEnvironment, http, HTTP_OK
class TestExitStatus:
def test_ok_response_exits_0(self, httpbin):
r = http('GET', httpbin.url + '/status/200')
assert HTTP_OK in r
assert r.exit_status == ExitStatus.OK
def test_error_response_exits_0_without_check_status(self, httpbin):
r = http('GET', httpbin.url + '/status/500')
assert '500 INTERNAL SERVER ERROR' in r
assert r.exit_status == ExitStatus.OK
assert not r.stderr
@pytest.mark.skipif(
tuple(map(int, requests.__version__.split('.'))) < (2, 3, 0),
reason='timeout broken in requests prior v2.3.0 (#185)'
)
def test_timeout_exit_status(self, httpbin):
r = http('--timeout=0.5', 'GET', httpbin.url + '/delay/1',
error_exit_ok=True)
assert r.exit_status == ExitStatus.ERROR_TIMEOUT
def test_3xx_check_status_exits_3_and_stderr_when_stdout_redirected(
self, httpbin):
env = TestEnvironment(stdout_isatty=False)
r = http('--check-status', '--headers',
'GET', httpbin.url + '/status/301',
env=env, error_exit_ok=True)
assert '301 MOVED PERMANENTLY' in r
assert r.exit_status == ExitStatus.ERROR_HTTP_3XX
assert '301 moved permanently' in r.stderr.lower()
@pytest.mark.skipif(
requests.__version__ == '0.13.6',
reason='Redirects with prefetch=False are broken in Requests 0.13.6')
def test_3xx_check_status_redirects_allowed_exits_0(self, httpbin):
r = http('--check-status', '--follow',
'GET', httpbin.url + '/status/301',
error_exit_ok=True)
# The redirect will be followed so 200 is expected.
assert HTTP_OK in r
assert r.exit_status == ExitStatus.OK
def test_4xx_check_status_exits_4(self, httpbin):
r = http('--check-status', 'GET', httpbin.url + '/status/401',
error_exit_ok=True)
assert '401 UNAUTHORIZED' in r
assert r.exit_status == ExitStatus.ERROR_HTTP_4XX
# Also stderr should be empty since stdout isn't redirected.
assert not r.stderr
def test_5xx_check_status_exits_5(self, httpbin):
r = http('--check-status', 'GET', httpbin.url + '/status/500',
error_exit_ok=True)
assert '500 INTERNAL SERVER ERROR' in r
assert r.exit_status == ExitStatus.ERROR_HTTP_5XX

79
tests/test_httpie.py Normal file
@@ -0,0 +1,79 @@
"""High-level tests."""
import pytest
from utils import TestEnvironment, http, HTTP_OK
from fixtures import FILE_PATH, FILE_CONTENT
import httpie
from httpie.compat import is_py26
class TestHTTPie:
def test_debug(self):
r = http('--debug')
assert r.exit_status == httpie.ExitStatus.OK
assert 'HTTPie %s' % httpie.__version__ in r.stderr
assert 'HTTPie data:' in r.stderr
def test_help(self):
r = http('--help', error_exit_ok=True)
assert r.exit_status == httpie.ExitStatus.OK
assert 'https://github.com/jakubroztocil/httpie/issues' in r
def test_version(self):
r = http('--version', error_exit_ok=True)
assert r.exit_status == httpie.ExitStatus.OK
# FIXME: py3 has version in stdout, py2 in stderr
assert httpie.__version__ == r.stderr.strip() + r.strip()
def test_GET(self, httpbin):
r = http('GET', httpbin.url + '/get')
assert HTTP_OK in r
def test_DELETE(self, httpbin):
r = http('DELETE', httpbin.url + '/delete')
assert HTTP_OK in r
def test_PUT(self, httpbin):
r = http('PUT', httpbin.url + '/put', 'foo=bar')
assert HTTP_OK in r
assert r.json['json']['foo'] == 'bar'
def test_POST_JSON_data(self, httpbin):
r = http('POST', httpbin.url + '/post', 'foo=bar')
assert HTTP_OK in r
assert r.json['json']['foo'] == 'bar'
def test_POST_form(self, httpbin):
r = http('--form', 'POST', httpbin.url + '/post', 'foo=bar')
assert HTTP_OK in r
assert '"foo": "bar"' in r
def test_POST_form_multiple_values(self, httpbin):
r = http('--form', 'POST', httpbin.url + '/post', 'foo=bar', 'foo=baz')
assert HTTP_OK in r
assert r.json['form'] == {'foo': ['bar', 'baz']}
def test_POST_stdin(self, httpbin):
with open(FILE_PATH) as f:
env = TestEnvironment(stdin=f, stdin_isatty=False)
r = http('--form', 'POST', httpbin.url + '/post', env=env)
assert HTTP_OK in r
assert FILE_CONTENT in r
def test_headers(self, httpbin):
r = http('GET', httpbin.url + '/headers', 'Foo:bar')
assert HTTP_OK in r
assert '"User-Agent": "HTTPie' in r, r
assert '"Foo": "bar"' in r
@pytest.mark.skipif(
is_py26,
reason='the `object_pairs_hook` arg for `json.loads()` is Py>2.6 only'
)
def test_json_input_preserve_order(self, httpbin):
r = http('PATCH', httpbin.url + '/patch',
'order:={"map":{"1":"first","2":"second"}}')
assert HTTP_OK in r
assert r.json['data'] == \
'{"order": {"map": {"1": "first", "2": "second"}}}'

140
tests/test_output.py Normal file
@@ -0,0 +1,140 @@
import pytest
from httpie import ExitStatus
from httpie.output.formatters.colors import get_lexer
from utils import TestEnvironment, http, HTTP_OK, COLOR, CRLF
class TestVerboseFlag:
def test_verbose(self, httpbin):
r = http('--verbose',
'GET', httpbin.url + '/get', 'test-header:__test__')
assert HTTP_OK in r
assert r.count('__test__') == 2
def test_verbose_form(self, httpbin):
# https://github.com/jakubroztocil/httpie/issues/53
r = http('--verbose', '--form', 'POST', httpbin.url + '/post',
'A=B', 'C=D')
assert HTTP_OK in r
assert 'A=B&C=D' in r
def test_verbose_json(self, httpbin):
r = http('--verbose',
'POST', httpbin.url + '/post', 'foo=bar', 'baz=bar')
assert HTTP_OK in r
assert '"baz": "bar"' in r
class TestColors:
@pytest.mark.parametrize('mime', [
'application/json',
'application/json+foo',
'application/foo+json',
'application/json-foo',
'application/x-json',
'foo/json',
'foo/json+bar',
'foo/bar+json',
'foo/json-foo',
'foo/x-json',
])
def test_get_lexer(self, mime):
lexer = get_lexer(mime)
assert lexer is not None
assert lexer.name == 'JSON'
def test_get_lexer_not_found(self):
assert get_lexer('xxx/yyy') is None
class TestPrettyOptions:
"""Test the --pretty flag handling."""
def test_pretty_enabled_by_default(self, httpbin):
env = TestEnvironment(colors=256)
r = http('GET', httpbin.url + '/get', env=env)
assert COLOR in r
def test_pretty_enabled_by_default_unless_stdout_redirected(self, httpbin):
r = http('GET', httpbin.url + '/get')
assert COLOR not in r
def test_force_pretty(self, httpbin):
env = TestEnvironment(stdout_isatty=False, colors=256)
r = http('--pretty=all', 'GET', httpbin.url + '/get', env=env, )
assert COLOR in r
def test_force_ugly(self, httpbin):
r = http('--pretty=none', 'GET', httpbin.url + '/get')
assert COLOR not in r
def test_subtype_based_pygments_lexer_match(self, httpbin):
"""Test that media subtype is used if type/subtype doesn't
match any lexer.
"""
env = TestEnvironment(colors=256)
r = http('--print=B', '--pretty=all', httpbin.url + '/post',
'Content-Type:text/foo+json', 'a=b', env=env)
assert COLOR in r
def test_colors_option(self, httpbin):
env = TestEnvironment(colors=256)
r = http('--print=B', '--pretty=colors',
'GET', httpbin.url + '/get', 'a=b',
env=env)
# Tests that the JSON data isn't formatted.
assert not r.strip().count('\n')
assert COLOR in r
def test_format_option(self, httpbin):
env = TestEnvironment(colors=256)
r = http('--print=B', '--pretty=format',
'GET', httpbin.url + '/get', 'a=b',
env=env)
# Tests that the JSON data is formatted.
assert r.strip().count('\n') == 2
assert COLOR not in r
class TestLineEndings:
"""
Test that CRLF is properly used in headers
and as the headers/body separator.
"""
def _validate_crlf(self, msg):
lines = iter(msg.splitlines(True))
for header in lines:
if header == CRLF:
break
assert header.endswith(CRLF), repr(header)
else:
assert 0, 'CRLF between headers and body not found in %r' % msg
body = ''.join(lines)
assert CRLF not in body
return body
def test_CRLF_headers_only(self, httpbin):
r = http('--headers', 'GET', httpbin.url + '/get')
body = self._validate_crlf(r)
assert not body, 'Garbage after headers: %r' % r
def test_CRLF_ugly_response(self, httpbin):
r = http('--pretty=none', 'GET', httpbin.url + '/get')
self._validate_crlf(r)
def test_CRLF_formatted_response(self, httpbin):
r = http('--pretty=format', 'GET', httpbin.url + '/get')
assert r.exit_status == ExitStatus.OK
self._validate_crlf(r)
def test_CRLF_ugly_request(self, httpbin):
r = http('--pretty=none', '--print=HB', 'GET', httpbin.url + '/get')
self._validate_crlf(r)
def test_CRLF_formatted_request(self, httpbin):
r = http('--pretty=format', '--print=HB', 'GET', httpbin.url + '/get')
self._validate_crlf(r)
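The message shape that _validate_crlf() checks for, written out explicitly (a sketch with made-up header and body values):

msg = (
    'HTTP/1.1 200 OK\r\n'                # every header line ends with CRLF
    'Content-Type: application/json\r\n'
    '\r\n'                               # a bare CRLF separates headers from body
    '{"ok": true}'                       # the body itself must contain no CRLF
)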

27
tests/test_regressions.py Normal file
@@ -0,0 +1,27 @@
"""Miscellaneous regression tests"""
import pytest
from utils import http, HTTP_OK
from httpie.compat import is_windows
def test_Host_header_overwrite(httpbin):
"""
https://github.com/jakubroztocil/httpie/issues/235
"""
host = 'httpbin.org'
url = httpbin.url + '/get'
r = http('--print=hH', url, 'host:{0}'.format(host))
assert HTTP_OK in r
assert r.lower().count('host:') == 1
assert 'host: {0}'.format(host) in r
@pytest.mark.skipif(is_windows, reason='Unix-only')
def test_output_devnull(httpbin):
"""
https://github.com/jakubroztocil/httpie/issues/252
"""
http('--output=/dev/null', httpbin + '/get')

164
tests/test_sessions.py Normal file
@@ -0,0 +1,164 @@
# coding=utf-8
import os
import shutil
from httpie.plugins.builtin import HTTPBasicAuth
from utils import TestEnvironment, mk_config_dir, http, HTTP_OK, \
no_content_type
from fixtures import UNICODE
class SessionTestBase(object):
def start_session(self, httpbin):
"""Create and reuse a unique config dir for each test."""
self.config_dir = mk_config_dir()
def teardown_method(self, method):
shutil.rmtree(self.config_dir)
def env(self):
"""
Return an environment.
Each environment created within a test method
shares the same config_dir. This is necessary
for the session files to be reused.
"""
return TestEnvironment(config_dir=self.config_dir)
class TestSessionFlow(SessionTestBase):
"""
These tests start with an existing session created in `start_session()`.
"""
def start_session(self, httpbin):
"""
Start a full-blown session with a custom request header,
authorization, and response cookies.
"""
super(TestSessionFlow, self).start_session(httpbin)
r1 = http('--follow', '--session=test', '--auth=username:password',
'GET', httpbin.url + '/cookies/set?hello=world',
'Hello:World',
env=self.env())
assert HTTP_OK in r1
def test_session_created_and_reused(self, httpbin):
self.start_session(httpbin)
# Verify that the session created in start_session() has been used.
r2 = http('--session=test',
'GET', httpbin.url + '/get', env=self.env())
assert HTTP_OK in r2
assert r2.json['headers']['Hello'] == 'World'
assert r2.json['headers']['Cookie'] == 'hello=world'
assert 'Basic ' in r2.json['headers']['Authorization']
def test_session_update(self, httpbin):
self.start_session(httpbin)
# Get a response to a request from the original session.
r2 = http('--session=test', 'GET', httpbin.url + '/get', env=self.env())
assert HTTP_OK in r2
# Make a request modifying the session data.
r3 = http('--follow', '--session=test', '--auth=username:password2',
'GET', httpbin.url + '/cookies/set?hello=world2', 'Hello:World2',
env=self.env())
assert HTTP_OK in r3
# Get a response to a request from the updated session.
r4 = http('--session=test', 'GET', httpbin.url + '/get', env=self.env())
assert HTTP_OK in r4
assert r4.json['headers']['Hello'] == 'World2'
assert r4.json['headers']['Cookie'] == 'hello=world2'
assert (r2.json['headers']['Authorization'] !=
r4.json['headers']['Authorization'])
def test_session_read_only(self, httpbin):
self.start_session(httpbin)
# Get a response from the original session.
r2 = http('--session=test', 'GET', httpbin.url + '/get', env=self.env())
assert HTTP_OK in r2
# Make a request modifying the session data but
# with --session-read-only.
r3 = http('--follow', '--session-read-only=test',
'--auth=username:password2', 'GET',
httpbin.url + '/cookies/set?hello=world2', 'Hello:World2',
env=self.env())
assert HTTP_OK in r3
# Get a response from the updated session.
r4 = http('--session=test', 'GET', httpbin.url + '/get', env=self.env())
assert HTTP_OK in r4
# Origin can differ on Travis.
del r2.json['origin'], r4.json['origin']
# Different for each request.
# Should be the same as before r3.
assert r2.json == r4.json
class TestSession(SessionTestBase):
"""Stand-alone session tests."""
def test_session_ignored_header_prefixes(self, httpbin):
self.start_session(httpbin)
r1 = http('--session=test', 'GET', httpbin.url + '/get',
'Content-Type: text/plain',
'If-Unmodified-Since: Sat, 29 Oct 1994 19:43:31 GMT',
env=self.env())
assert HTTP_OK in r1
r2 = http('--session=test', 'GET', httpbin.url + '/get', env=self.env())
assert HTTP_OK in r2
assert no_content_type(r2.json['headers'])
assert 'If-Unmodified-Since' not in r2.json['headers']
def test_session_by_path(self, httpbin):
self.start_session(httpbin)
session_path = os.path.join(self.config_dir, 'session-by-path.json')
r1 = http('--session=' + session_path, 'GET', httpbin.url + '/get',
'Foo:Bar', env=self.env())
assert HTTP_OK in r1
r2 = http('--session=' + session_path, 'GET', httpbin.url + '/get',
env=self.env())
assert HTTP_OK in r2
assert r2.json['headers']['Foo'] == 'Bar'
def test_session_unicode(self, httpbin):
self.start_session(httpbin)
r1 = http('--session=test', u'--auth=test:' + UNICODE,
'GET', httpbin.url + '/get', u'Test:%s' % UNICODE,
env=self.env())
assert HTTP_OK in r1
r2 = http('--session=test', '--verbose', 'GET',
httpbin.url + '/get', env=self.env())
assert HTTP_OK in r2
# FIXME: Authorization *sometimes* is not present on Python3
assert (r2.json['headers']['Authorization']
== HTTPBasicAuth.make_header(u'test', UNICODE))
# httpbin doesn't interpret utf8 headers
assert UNICODE in r2
def test_session_default_header_value_overwritten(self, httpbin):
self.start_session(httpbin)
# https://github.com/jakubroztocil/httpie/issues/180
r1 = http('--session=test',
httpbin.url + '/headers', 'User-Agent:custom',
env=self.env())
assert HTTP_OK in r1
assert r1.json['headers']['User-Agent'] == 'custom'
r2 = http('--session=test', httpbin.url + '/headers', env=self.env())
assert HTTP_OK in r2
assert r2.json['headers']['User-Agent'] == 'custom'

79
tests/test_ssl.py Normal file
@@ -0,0 +1,79 @@
import os
import pytest
import pytest_httpbin.certs
from requests.exceptions import SSLError
from httpie import ExitStatus
from utils import http, HTTP_OK, TESTS_ROOT
CLIENT_CERT = os.path.join(TESTS_ROOT, 'client_certs', 'client.crt')
CLIENT_KEY = os.path.join(TESTS_ROOT, 'client_certs', 'client.key')
CLIENT_PEM = os.path.join(TESTS_ROOT, 'client_certs', 'client.pem')
# We test against a local httpbin instance which uses a self-signed cert.
# Requests without --verify=<CA_BUNDLE> will fail with a verification error.
# See: https://github.com/kevin1024/pytest-httpbin#https-support
CA_BUNDLE = pytest_httpbin.certs.where()
class TestClientSSLCertHandling(object):
def test_cert_file_not_found(self, httpbin_secure):
r = http(httpbin_secure + '/get',
'--verify', CA_BUNDLE,
'--cert', '/__not_found__',
error_exit_ok=True)
assert r.exit_status == ExitStatus.ERROR
assert 'No such file or directory' in r.stderr
def test_cert_file_invalid(self, httpbin_secure):
with pytest.raises(SSLError):
http(httpbin_secure + '/get',
'--verify', CA_BUNDLE,
'--cert', __file__)
def test_cert_ok_but_missing_key(self, httpbin_secure):
with pytest.raises(SSLError):
http(httpbin_secure + '/get',
'--verify', CA_BUNDLE,
'--cert', CLIENT_CERT)
def test_cert_and_key(self, httpbin_secure):
r = http(httpbin_secure + '/get',
'--verify', CA_BUNDLE,
'--cert', CLIENT_CERT,
'--cert-key', CLIENT_KEY)
assert HTTP_OK in r
def test_cert_pem(self, httpbin_secure):
r = http(httpbin_secure + '/get',
'--verify', CA_BUNDLE,
'--cert', CLIENT_PEM)
assert HTTP_OK in r
class TestServerSSLCertHandling(object):
def test_self_signed_server_cert_by_default_raises_ssl_error(
self, httpbin_secure):
with pytest.raises(SSLError):
http(httpbin_secure.url + '/get')
def test_verify_no_OK(self, httpbin_secure):
r = http(httpbin_secure.url + '/get', '--verify=no')
assert HTTP_OK in r
def test_verify_custom_ca_bundle_path(
self, httpbin_secure):
r = http(httpbin_secure.url + '/get', '--verify', CA_BUNDLE)
assert HTTP_OK in r
def test_verify_custom_ca_bundle_invalid_path(self, httpbin_secure):
with pytest.raises(SSLError):
http(httpbin_secure.url + '/get', '--verify', '/__not_found__')
def test_verify_custom_ca_bundle_invalid_bundle(self, httpbin_secure):
with pytest.raises(SSLError):
http(httpbin_secure.url + '/get', '--verify', __file__)

42
tests/test_stream.py Normal file
@@ -0,0 +1,42 @@
import pytest
from httpie.compat import is_windows
from httpie.output.streams import BINARY_SUPPRESSED_NOTICE
from utils import http, TestEnvironment
from fixtures import BIN_FILE_CONTENT, BIN_FILE_PATH
class TestStream:
# GET because httpbin 500s with binary POST body.
@pytest.mark.skipif(is_windows,
reason='Pretty redirect not supported under Windows')
def test_pretty_redirected_stream(self, httpbin):
"""Test that --stream works with prettified redirected output."""
with open(BIN_FILE_PATH, 'rb') as f:
env = TestEnvironment(colors=256, stdin=f,
stdin_isatty=False,
stdout_isatty=False)
r = http('--verbose', '--pretty=all', '--stream', 'GET',
httpbin.url + '/get', env=env)
assert BINARY_SUPPRESSED_NOTICE.decode() in r
def test_encoded_stream(self, httpbin):
"""Test that --stream works with non-prettified
redirected terminal output."""
with open(BIN_FILE_PATH, 'rb') as f:
env = TestEnvironment(stdin=f, stdin_isatty=False)
r = http('--pretty=none', '--stream', '--verbose', 'GET',
httpbin.url + '/get', env=env)
assert BINARY_SUPPRESSED_NOTICE.decode() in r
def test_redirected_stream(self, httpbin):
"""Test that --stream works with non-prettified
redirected terminal output."""
with open(BIN_FILE_PATH, 'rb') as f:
env = TestEnvironment(stdout_isatty=False,
stdin_isatty=False,
stdin=f)
r = http('--pretty=none', '--stream', '--verbose', 'GET',
httpbin.url + '/get', env=env)
assert BIN_FILE_CONTENT in r

87
tests/test_unicode.py Normal file
@@ -0,0 +1,87 @@
# coding=utf-8
"""
Various unicode handling related tests.
"""
from utils import http, HTTP_OK
from fixtures import UNICODE
class TestUnicode:
def test_unicode_headers(self, httpbin):
# httpbin doesn't interpret utf8 headers
r = http(httpbin.url + '/headers', u'Test:%s' % UNICODE)
assert HTTP_OK in r
def test_unicode_headers_verbose(self, httpbin):
# httpbin doesn't interpret utf8 headers
r = http('--verbose', httpbin.url + '/headers', u'Test:%s' % UNICODE)
assert HTTP_OK in r
assert UNICODE in r
def test_unicode_form_item(self, httpbin):
r = http('--form', 'POST', httpbin.url + '/post', u'test=%s' % UNICODE)
assert HTTP_OK in r
assert r.json['form'] == {'test': UNICODE}
def test_unicode_form_item_verbose(self, httpbin):
r = http('--verbose', '--form',
'POST', httpbin.url + '/post', u'test=%s' % UNICODE)
assert HTTP_OK in r
assert UNICODE in r
def test_unicode_json_item(self, httpbin):
r = http('--json', 'POST', httpbin.url + '/post', u'test=%s' % UNICODE)
assert HTTP_OK in r
assert r.json['json'] == {'test': UNICODE}
def test_unicode_json_item_verbose(self, httpbin):
r = http('--verbose', '--json',
'POST', httpbin.url + '/post', u'test=%s' % UNICODE)
assert HTTP_OK in r
assert UNICODE in r
def test_unicode_raw_json_item(self, httpbin):
r = http('--json', 'POST', httpbin.url + '/post',
u'test:={ "%s" : [ "%s" ] }' % (UNICODE, UNICODE))
assert HTTP_OK in r
assert r.json['json'] == {'test': {UNICODE: [UNICODE]}}
def test_unicode_raw_json_item_verbose(self, httpbin):
r = http('--verbose', '--json', 'POST', httpbin.url + '/post',
u'test:={ "%s" : [ "%s" ] }' % (UNICODE, UNICODE))
assert HTTP_OK in r
assert UNICODE in r
def test_unicode_url_query_arg_item(self, httpbin):
r = http(httpbin.url + '/get', u'test==%s' % UNICODE)
assert HTTP_OK in r
assert r.json['args'] == {'test': UNICODE}, r
def test_unicode_url_query_arg_item_verbose(self, httpbin):
r = http('--verbose', httpbin.url + '/get', u'test==%s' % UNICODE)
assert HTTP_OK in r
assert UNICODE in r
def test_unicode_url(self, httpbin):
r = http(httpbin.url + u'/get?test=' + UNICODE)
assert HTTP_OK in r
assert r.json['args'] == {'test': UNICODE}
# def test_unicode_url_verbose(self):
# r = http(httpbin.url + '--verbose', u'/get?test=' + UNICODE)
# assert HTTP_OK in r
def test_unicode_basic_auth(self, httpbin):
# it doesn't really authenticate us because httpbin
# doesn't interpret the utf8-encoded auth
http('--verbose', '--auth', u'test:%s' % UNICODE,
httpbin.url + u'/basic-auth/test/' + UNICODE)
def test_unicode_digest_auth(self, httpbin):
# it doesn't really authenticate us because httpbin
# doesn't interpret the utf8-encoded auth
http('--auth-type=digest',
'--auth', u'test:%s' % UNICODE,
httpbin.url + u'/digest-auth/auth/test/' + UNICODE)

73
tests/test_uploads.py Normal file
@@ -0,0 +1,73 @@
import os
import pytest
from httpie.input import ParseError
from utils import TestEnvironment, http, HTTP_OK
from fixtures import FILE_PATH_ARG, FILE_PATH, FILE_CONTENT
class TestMultipartFormDataFileUpload:
def test_non_existent_file_raises_parse_error(self, httpbin):
with pytest.raises(ParseError):
http('--form',
'POST', httpbin.url + '/post', 'foo@/__does_not_exist__')
def test_upload_ok(self, httpbin):
r = http('--form', '--verbose', 'POST', httpbin.url + '/post',
'test-file@%s' % FILE_PATH_ARG, 'foo=bar')
assert HTTP_OK in r
assert 'Content-Disposition: form-data; name="foo"' in r
assert 'Content-Disposition: form-data; name="test-file";' \
' filename="%s"' % os.path.basename(FILE_PATH) in r
assert FILE_CONTENT in r
assert '"foo": "bar"' in r
def test_upload_multiple_fields_with_the_same_name(self, httpbin):
r = http('--form', '--verbose', 'POST', httpbin.url + '/post',
'test-file@%s' % FILE_PATH_ARG,
'test-file@%s' % FILE_PATH_ARG)
assert HTTP_OK in r
assert r.count('Content-Disposition: form-data; name="test-file";'
' filename="%s"' % os.path.basename(FILE_PATH)) == 2
# Should be 4, but is 3 because httpbin
# doesn't seem to support file field lists
assert r.count(FILE_CONTENT) in [3, 4]
class TestRequestBodyFromFilePath:
"""
`http URL @file'
"""
def test_request_body_from_file_by_path(self, httpbin):
r = http('--verbose',
'POST', httpbin.url + '/post', '@' + FILE_PATH_ARG)
assert HTTP_OK in r
assert FILE_CONTENT in r, r
assert '"Content-Type": "text/plain"' in r
def test_request_body_from_file_by_path_with_explicit_content_type(
self, httpbin):
r = http('--verbose',
'POST', httpbin.url + '/post', '@' + FILE_PATH_ARG,
'Content-Type:text/plain; charset=utf8')
assert HTTP_OK in r
assert FILE_CONTENT in r
assert 'Content-Type: text/plain; charset=utf8' in r
def test_request_body_from_file_by_path_no_field_name_allowed(
self, httpbin):
env = TestEnvironment(stdin_isatty=True)
r = http('POST', httpbin.url + '/post', 'field-name@' + FILE_PATH_ARG,
env=env, error_exit_ok=True)
assert 'perhaps you meant --form?' in r.stderr
def test_request_body_from_file_by_path_no_data_items_allowed(
self, httpbin):
env = TestEnvironment(stdin_isatty=False)
r = http('POST', httpbin.url + '/post', '@' + FILE_PATH_ARG, 'foo=bar',
env=env, error_exit_ok=True)
assert 'cannot be mixed' in r.stderr

29
tests/test_windows.py Normal file
@@ -0,0 +1,29 @@
import os
import tempfile
import pytest
from httpie.context import Environment
from utils import TestEnvironment, http
from httpie.compat import is_windows
@pytest.mark.skipif(not is_windows, reason='windows-only')
class TestWindowsOnly:
@pytest.mark.skipif(True,
reason='this test for some reason kills the process')
def test_windows_colorized_output(self, httpbin):
# Spits out the colorized output.
http(httpbin.url + '/get', env=Environment())
class TestFakeWindows:
def test_output_file_pretty_not_allowed_on_windows(self, httpbin):
env = TestEnvironment(is_windows=True)
output_file = os.path.join(
tempfile.gettempdir(), '__httpie_test_output__')
r = http('--output', output_file,
'--pretty=all', 'GET', httpbin.url + '/get',
env=env, error_exit_ok=True)
assert 'Only terminal output can be colorized on Windows' in r.stderr

File diff suppressed because it is too large

241
tests/utils.py Normal file
@@ -0,0 +1,241 @@
# coding=utf-8
"""Utilities used by HTTPie tests.
"""
import os
import sys
import time
import json
import shutil
import tempfile
import httpie
from httpie.context import Environment
from httpie.core import main
from httpie.compat import bytes, str
TESTS_ROOT = os.path.abspath(os.path.dirname(__file__))
CRLF = '\r\n'
COLOR = '\x1b['
HTTP_OK = '200 OK'
HTTP_OK_COLOR = (
'HTTP\x1b[39m\x1b[38;5;245m/\x1b[39m\x1b'
'[38;5;37m1.1\x1b[39m\x1b[38;5;245m \x1b[39m\x1b[38;5;37m200'
'\x1b[39m\x1b[38;5;245m \x1b[39m\x1b[38;5;136mOK'
)
def no_content_type(headers):
return (
'Content-Type' not in headers
# We also need this check because of the following issue:
# <https://github.com/kevin1024/pytest-httpbin/issues/5>
# TODO: remove this function once the issue is fixed
or headers['Content-Type'] == 'text/plain'
)
def add_auth(url, auth):
proto, rest = url.split('://', 1)
return proto + '://' + auth + '@' + rest
class TestEnvironment(Environment):
"""
Environment subclass with reasonable defaults suitable for testing.
"""
colors = 0
stdin_isatty = True
stdout_isatty = True
is_windows = False
_shutil = shutil # needed by __del__ (would get gc'd)
def __init__(self, **kwargs):
if 'stdout' not in kwargs:
kwargs['stdout'] = tempfile.TemporaryFile('w+b')
if 'stderr' not in kwargs:
kwargs['stderr'] = tempfile.TemporaryFile('w+t')
self.delete_config_dir = False
if 'config_dir' not in kwargs:
kwargs['config_dir'] = mk_config_dir()
self.delete_config_dir = True
super(TestEnvironment, self).__init__(**kwargs)
def __del__(self):
if self.delete_config_dir:
self._shutil.rmtree(self.config_dir)
def http(*args, **kwargs):
"""
Run HTTPie and capture stderr/out and exit status.
Invoke `httpie.core.main()` with `args` and `kwargs`,
and return a `CLIResponse` subclass instance.
The return value is either a `StrCLIResponse`, or a `BytesCLIResponse`
if the output cannot be decoded as text.
The response has the following attributes:
`stdout` is represented by the instance itself (print r)
`stderr`: text written to stderr
`exit_status`: the exit status
`json`: decoded JSON (if possible) or `None`
Exceptions are propagated.
If you pass ``error_exit_ok=True``, then error exit statuses
won't result in an exception.
Example:
$ http --auth=user:password GET httpbin.org/basic-auth/user/password
>>> r = http('-a', 'user:pw', 'httpbin.org/basic-auth/user/pw')
>>> type(r) == StrCLIResponse
True
>>> r.exit_status
0
>>> r.stderr
''
>>> 'HTTP/1.1 200 OK' in r
True
>>> r.json == {'authenticated': True, 'user': 'user'}
True
"""
error_exit_ok = kwargs.pop('error_exit_ok', False)
env = kwargs.get('env')
if not env:
env = kwargs['env'] = TestEnvironment()
stdout = env.stdout
stderr = env.stderr
args = list(args)
if '--debug' not in args and '--traceback' not in args:
args = ['--traceback'] + args
def dump_stderr():
stderr.seek(0)
sys.stderr.write(stderr.read())
try:
try:
exit_status = main(args=args, **kwargs)
if '--download' in args:
# Let the progress reporter thread finish.
time.sleep(.5)
except SystemExit:
if error_exit_ok:
exit_status = httpie.ExitStatus.ERROR
else:
dump_stderr()
raise
except Exception:
stderr.seek(0)
sys.stderr.write(stderr.read())
raise
else:
if exit_status != httpie.ExitStatus.OK and not error_exit_ok:
dump_stderr()
raise Exception('Unexpected exit status: %s' % exit_status)
stdout.seek(0)
stderr.seek(0)
output = stdout.read()
try:
output = output.decode('utf8')
except UnicodeDecodeError:
# noinspection PyArgumentList
r = BytesCLIResponse(output)
else:
# noinspection PyArgumentList
r = StrCLIResponse(output)
r.stderr = stderr.read()
r.exit_status = exit_status
if r.exit_status != httpie.ExitStatus.OK:
sys.stderr.write(r.stderr)
return r
finally:
stdout.close()
stderr.close()
class BaseCLIResponse(object):
"""
Represents the result of simulated `$ http' invocation via `http()`.
Holds and provides access to:
- stdout output: print(self)
- stderr output: print(self.stderr)
- exit_status output: print(self.exit_status)
"""
stderr = None
json = None
exit_status = None
class BytesCLIResponse(bytes, BaseCLIResponse):
"""
Used as a fallback when a StrCLIResponse cannot be used.
E.g. when the output contains binary data or when it is colorized.
`.json` will always be None.
"""
class StrCLIResponse(str, BaseCLIResponse):
@property
def json(self):
"""
Return the deserialized JSON body, if one is included in the output
and is parseable.
"""
if not hasattr(self, '_json'):
self._json = None
# De-serialize JSON body if possible.
if COLOR in self:
# Colorized output cannot be parsed.
pass
elif self.strip().startswith('{'):
# Looks like JSON body.
self._json = json.loads(self)
elif (self.count('Content-Type:') == 1
and 'application/json' in self):
# Looks like a whole JSON HTTP message,
# try to extract its body.
try:
j = self.strip()[self.strip().rindex('\r\n\r\n'):]
except ValueError:
pass
else:
try:
self._json = json.loads(j)
except ValueError:
pass
return self._json
def mk_config_dir():
return tempfile.mkdtemp(prefix='httpie_test_config_dir_')
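One more condensed example of how these helpers combine, showing the error_exit_ok flag used throughout the suite (a sketch; the URL is assumed to point at a local httpbin instance serving a 500):

env = TestEnvironment(stdin_isatty=True, stdout_isatty=False)
r = http('--check-status', 'GET', 'http://127.0.0.1:8000/status/500',
         env=env, error_exit_ok=True)
assert r.exit_status == httpie.ExitStatus.ERROR_HTTP_5XX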

26
tox.ini
@@ -1,19 +1,21 @@
# Tox (http://tox.testrun.org/) is a tool for running tests
# in multiple virtualenvs. This configuration file will run the
# test suite on all supported python versions. To use it, "pip install tox"
# and then run "tox" from this directory.
# in multiple virtualenvs.
# Run:
# $ pip install -r requirements-dev.txt
# $ tox
[tox]
envlist = py26, py27, py32, pypy
envlist = py26, py27, py34, pypy
[testenv]
commands = {envpython} setup.py test
deps =
pytest
pytest-httpbin
[testenv:py26]
deps = argparse
commands =
py.test --verbose --doctest-modules --basetemp={envtmpdir} {posargs:./tests ./httpie}
[testenv:py30]
deps = argparse
[testenv:py31]
deps = argparse
[pytest]
addopts = --tb=native